Inside Artificialis - #14

Mar 31, 2023 8:01 am


Hey! Welcome to the monthly edition of the Artificialis newsletter!


A summary of what happened during the month, the best blogs from our Medium publication, the latest from our Discord server events, and recent news from the world of Artificial Intelligence and Machine Learning!


So sit back, grab a cup of coffee, and let's go over what happened this month.


A little bit from me and the server:

This month, we introduced two main things:

  • Visionary is officially out of beta! Now at version 2.1, Visionary supports slash commands only, accepts file uploads instead of URLs for Hydra and the other commands, and GPT-4 is fully integrated into the /mentor command!
  • We introduced the Distinguished role: it will be given to exceptional members who consistently help others in the server and stay active in the community (whether here on the server or as writers for our Medium publication), and it comes with perks (access to special Visionary commands, extra privileges, giveaways, etc.).


From our Medium publication:


AI in the world

I can't even begin to count everything that happened this month, so let's go over the most important items:


OpenAI released GPT-4

At this point, you've probably already seen it, but let's go over its main points:

  • Multimodality: the model can now understand both text and images
  • Up to 32k context tokens
  • Available on ChatGPT Plus and via the API, while the image input capability is in closed alpha
  • GPT-4 outperforms the previous state of the art on multiple benchmarks, and it scores in the top 10% on many real-world exams!


My two cents: GPT-4 is incredible, but the real innovation will come with the wave of amazing products people will build on top of it.
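If you want to start experimenting yourself, here's a minimal sketch of a GPT-4 call using the openai Python package (the pre-1.0 ChatCompletion interface). The system prompt, question, and temperature are just illustrative placeholders, and you need your own API key with GPT-4 access.

# Minimal sketch: calling GPT-4 through OpenAI's chat completions endpoint.
# Assumes the pre-1.0 openai package and an API key with GPT-4 access.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4",  # a 32k-context variant ("gpt-4-32k") is also being rolled out
    messages=[
        {"role": "system", "content": "You are a concise ML mentor."},
        {"role": "user", "content": "Explain multimodality in one sentence."},
    ],
    temperature=0.7,  # illustrative setting
)

print(response["choices"][0]["message"]["content"])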

On that note, Microsoft announced that GPT-4 and DALL-E are being integrated into Bing, Microsoft 365, and Image Creator!


Meta unveils a new large language model that can run on a single GPU

From Meta's abstract: "We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research community."
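To give an idea of the "single GPU" part, here's a rough sketch of loading LLaMA-7B for generation with the Hugging Face transformers library. It assumes you've requested the weights from Meta, converted them to the Hugging Face format, and installed a transformers release with LLaMA support; the local path below is hypothetical.

# Rough sketch: running LLaMA-7B locally in half precision so it fits on one GPU.
# The model path is a hypothetical local directory with converted weights.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

model_path = "path/to/llama-7b-hf"
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # fp16 keeps the 7B model at roughly 14 GB of VRAM
    device_map="auto",          # requires the accelerate package
)

prompt = "Foundation language models are"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))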


Google is bringing Generative AI to Google Cloud!

If you want to build and customize models and apps, you can now access Google's models, like PaLM, their best language model!

Moreover, we'll see:


  • Generative AI support in Vertex AI
  • Generative AI App Builder
  • New AI partnerships and programs


Google Workspace will also get some interesting new features. For example, in Gmail and Google Docs you'll be able to simply type in a topic you'd like to write about, and a draft will be instantly generated for you!


Here are a couple of other very interesting pieces of news!


AssemblyAI released Conformer-1 - a new state of the art for Speech Recognition tasks!

The Conformer architecture was introduced by the Google Brain team three years ago.

It combines Transformers (their ease of training and ability to capture global interactions) with CNNs (their ability to exploit local features).

The Conformer can efficiently model an audio sequence's local and global dependencies (the best of both worlds).
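To make the "attention for global context, convolution for local detail" idea concrete, here's a deliberately simplified Conformer-style block in PyTorch. The real architecture also has macaron feed-forward modules, relative positional encodings, and gating; the dimensions below are made-up examples.

# Simplified Conformer-style block: self-attention (global) + depthwise conv (local).
import torch
import torch.nn as nn

class TinyConformerBlock(nn.Module):
    def __init__(self, dim=256, heads=4, conv_kernel=31):
        super().__init__()
        self.attn_norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)  # global interactions
        self.conv_norm = nn.LayerNorm(dim)
        self.conv = nn.Conv1d(dim, dim, conv_kernel,
                              padding=conv_kernel // 2, groups=dim)      # local features (depthwise)
        self.ff = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, 4 * dim),
                                nn.SiLU(), nn.Linear(4 * dim, dim))

    def forward(self, x):  # x: (batch, time, dim) sequence of audio frames
        h = self.attn_norm(x)
        a, _ = self.attn(h, h, h)
        x = x + a                                                        # residual: global dependencies
        c = self.conv(self.conv_norm(x).transpose(1, 2)).transpose(1, 2)
        x = x + c                                                        # residual: local dependencies
        return x + self.ff(x)                                            # residual: feed-forward

frames = torch.randn(2, 100, 256)          # dummy batch: 2 utterances, 100 frames each
print(TinyConformerBlock()(frames).shape)  # -> torch.Size([2, 100, 256])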

Moreover, AssemblyAI also used DeepMind's scaling laws (the ones for efficiently training LLMs) and trained on a 650k-hour dataset!

The result?

Conformer-1 is more robust on real-world data than popular commercially available models.

For example, Conformer-1 makes 43% fewer errors on noisy data than Whisper!


Google can detect systemic biomarkers from external eye photos!

Using a simple logistic regression model as a baseline, they showed that a number of systemic biomarkers (spanning several organs, like the kidneys and liver) can be predicted just from an external eye photo.

This means it has a lot of potential to increase access to disease detection and monitoring without invasive techniques!
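As a purely illustrative sketch (this is not Google's actual pipeline or data), here is what a logistic regression baseline for predicting a binary "biomarker above threshold" label from image-derived features might look like with scikit-learn; everything below, including the features, is synthetic.

# Illustrative only: a logistic regression baseline on synthetic "image features".
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))  # stand-in for features derived from eye photos + demographics
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)  # synthetic label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
baseline = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, baseline.predict_proba(X_test)[:, 1]))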


Pause Giant AI Experiments: An Open Letter

Thousands of people (including Elon Musk, Yoshua Bengio, Gary Marcus, and many more) signed this letter calling for a pause on the training of ever-bigger LLMs. Curious what you think about it!


Tip of the month

Recently, Stanford University released Alpaca, an instruction fine-tuned model based on Meta's LLaMA. They were able to re-create a ChatGPT-like model, and data generation plus training cost only around $500!


Check out the article and dataset!
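If you're curious how the instruction data looks in practice, here's a small sketch of the Alpaca-style record layout and prompt template; the field names and template follow the released dataset, while the example record itself is made up.

# Sketch: an Alpaca-style instruction record and the prompt it is turned into.
example = {
    "instruction": "Summarize the main idea of the text.",
    "input": "Conformer blocks combine self-attention with convolutions.",
    "output": "They capture both global and local structure in audio.",
}

PROMPT = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

# During fine-tuning, the model learns to produce `output` given this prompt.
print(PROMPT.format(**example) + example["output"])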



'Till next month, you can find everyone here:


Have a fantastic month!



If you'd like to support our community and get access to millions of amazing articles and tutorials, consider subscribing to Medium's program via our link; we'll receive a small portion of your fee.

All of the money will be used to sponsor prizes for our Discord events!


Referral

