Inside Artificialis - #13

Feb 28, 2023 10:22 am


Hey! Welcome to the monthly edition of the Artificialis newsletter!


A summary of what happened during the month: the best blog posts from our Medium publication, the latest on our Discord server events, and recent news from the world of Artificial Intelligence and Machine Learning!


So sit back, grab a cup of coffee, and let's go over what happened this month.


A little bit from me and the server:

This February, we introduced two main things:

  • Our custom Discord bot, Visionary, has entered its beta and currently supports only slash commands. There is a GPT-3 command (in beta testing) that will act as a Machine Learning tutor for our Distinguished* members.
  • *We introduced the Distinguished role: it will be given to exceptional people who constantly help others and are active in the community (either here in the server or as writers for our Medium publication), and it comes with perks (access to special Visionary commands, more privileges, giveaways, etc.).


A very big shoutout also to beta-tensor, who shared an amazing project in the showcase channel (here's the YouTube link): he built a musical chairs game using reinforcement learning!




From our Medium publication:


AI in the world

Google releases a demo for MusicLM

An AI model capable of generating music from text. Think of DALL-E, but for music. You can even start with an image and generate a soundtrack for it! To be honest, the quality is not always great, but it's certainly an impressive beginning.


Introducing LLaMA: A foundational, 65-billion-parameter large language model

As part of Meta’s commitment to open science, today we are publicly releasing LLaMA (Large Language Model Meta AI), a state-of-the-art foundational large language model designed to help researchers advance their work in this subfield of AI. Smaller, more performant models such as LLaMA enable others in the research community who don’t have access to large amounts of infrastructure to study these models, further democratizing access in this important, fast-changing field. [...]


Suppressing quantum errors by scaling a surface code logical qubit

The challenge is that qubits are so sensitive that even stray light can cause calculation errors — and the problem worsens as quantum computers grow. This has significant consequences, since the best quantum algorithms that we know for running useful applications require the error rates of our qubits to be far lower than we have today. To bridge this gap, we will need quantum error correction. Quantum error correction protects information by encoding it across multiple physical qubits to form a “logical qubit,” and is believed to be the only way to produce a large-scale quantum computer with error rates low enough for useful calculations. [...]


2023 will be an open war between tech giants as the race for chatbots heats up

Microsoft held an event to show off its cool integrations of ChatGPT with Bing (its search engine) and Edge (its browser).

Just one day before Microsoft’s announcement, Google had announced its own answer to ChatGPT: a new powerful AI called Bard. Google published a fancy blog post, with references to their great history of AI innovation, saying that the underlying technology powering ChatGPT was actually invented by them in 2017 (it’s called a “Transformer”).

Too bad the demo they showed contained a mistake.

In Google’s demo, Bard answered the question "What new discoveries from the James Webb Space Telescope (JWST) can I tell my 9-year-old about?" with some bullet points. The last point said that the JWST took the "very first pictures" of an exoplanet outside our solar system. In reality, the first image of an exoplanet was taken by the European Southern Observatory's Very Large Telescope in 2004, according to NASA.

This error caused Google's stock to drop around 7%, wiping roughly $100B off its market cap.

Turns out, Microsoft's product has a major flaw as well:

it's able to argue with users, threaten them, lie to them, and much more when it feels "threatened".

We'll see how all of this turns out.



Tip of the month

Today, I want to share Cog: an open-source tool that lets you package machine learning models in a standard, production-ready container. From their highlights:

  • Docker containers without pain: write a simple configuration file and Cog generates Docker images with all the best practices (NVIDIA base images, caching, dependencies, environment variables, ...)
  • Automatic CUDA setup
  • Define the inputs and outputs for your model with standard Python. Then, Cog generates an OpenAPI schema and validates the inputs and outputs with Pydantic.
  • Automatic HTTP prediction server using FastAPI
  • Cloud storage
  • And more!

Check it out at https://github.com/replicate/cog
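
To give a concrete feel for it, here's a minimal sketch, assuming a trivial text-in/text-out model (the prompt parameter and the echo logic are placeholders for illustration): you expose your model to Cog through a small Predictor class.

    # predict.py -- hypothetical, minimal predictor for illustration only
    from cog import BasePredictor, Input

    class Predictor(BasePredictor):
        def setup(self):
            # Runs once when the container starts; a real project would
            # load model weights here instead of this placeholder.
            self.prefix = "echo: "

        def predict(self, prompt: str = Input(description="Text to process")) -> str:
            # Cog turns the type hints into an OpenAPI schema and validates
            # inputs and outputs with Pydantic before calling this method.
            return self.prefix + prompt

Alongside it, a cog.yaml file declares the environment (Python version, packages, whether you need a GPU). Then cog predict -i prompt="hello" runs a single prediction locally, and cog build packages everything into a Docker image with the HTTP prediction server included.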



'Til next month, you can find everyone here:


Have a fantastic month!



If you'd like to support our community and get access to millions of amazing articles and tutorials, consider subscribing to Medium's membership program via our link; we'll receive a small portion of your fee.

All of the money will be used to sponsor prizes for our Discord events!


Referral

