Inside Artificialis - #11

Nov 30, 2022 10:01 am


Hey! Welcome to the monthly version of the Artificialis newsletter!


A summary of what happened during the month: the best blog posts from our Medium publication, the latest from our Discord server events, and recent news from the world of Artificial Intelligence and Machine Learning!


So sit back, grab a cup of coffee, and let's go over what happened this month.


A little bit from me:

I experimented a little with multi-task learning. I recently implemented the paper Real Time Joint Semantic Segmentation & Depth Estimation Using Asymmetric Annotations; the link leads to my website, where you'll also find the link to the repository!
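If you're wondering what multi-task learning looks like in practice, here's a minimal PyTorch sketch of the general idea: one shared encoder feeding a segmentation head and a depth head, trained with a combined loss. This is just an illustration of the concept, not the paper's actual architecture, and all the names are made up:

# A minimal multi-task sketch: shared encoder, one head per task.
# Illustrative only; this is NOT the paper's architecture.
import torch
import torch.nn as nn

class TinyMultiTaskNet(nn.Module):
    def __init__(self, num_classes=19):
        super().__init__()
        # Shared feature extractor used by both tasks.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        # Task-specific heads on top of the shared features.
        self.seg_head = nn.Conv2d(32, num_classes, 1)  # per-pixel class logits
        self.depth_head = nn.Conv2d(32, 1, 1)          # per-pixel depth

    def forward(self, x):
        features = self.encoder(x)
        return self.seg_head(features), self.depth_head(features)

model = TinyMultiTaskNet()
image = torch.randn(2, 3, 64, 64)
seg_logits, depth = model(image)
# Training combines both objectives, e.g.:
# loss = ce_loss(seg_logits, seg_labels) + l1_loss(depth, depth_labels)

The appeal is that both tasks share most of the computation and can regularize each other; the tricky part (and the subject of the paper) is handling datasets where not every image is annotated for every task.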


I didn't have much time to write a cool blog article; the consultancy cliché might be catching up with me? :D

However, if you ever want to write cool articles on Medium about machine learning, science, AI, etc., hit me up on the Discord server! We'll promote your article and publish it in our publication.



AI in the world

An incredible number of things happened this month; let's look at some of them:


Robots That Write Their Own Code

A common approach used to control robots is to program them with code to detect objects, sequencing commands to move actuators, and feedback loops to specify how the robot should perform a task. While these programs can be expressive, re-programming policies for each new task can be time consuming, and requires domain expertise.

What if when given instructions from people, robots could autonomously write their own code to interact with the world? It turns out that the latest generation of language models, such as PaLM, are capable of complex reasoning and have also been trained on millions of lines of code. [...]


Beyond Tabula Rasa: Reincarnating Reinforcement Learning

Due to the generality of RL, the prevalent trend in RL research is to develop agents that can efficiently learn tabula rasa, that is, from scratch without using previously learned knowledge about the problem. [...]

Here, we propose an alternative approach to RL research, where prior computational work, such as learned models, policies, logged data, etc., is reused or transferred between design iterations of an RL agent or from one agent to another. [...]


Making an Invisibility Cloak: Real World Adversarial Attacks on Object Detectors

In this work, we present a systematic study of adversarial attacks on state-of-the-art object detection frameworks. Using standard detection datasets, we train patterns that suppress the objectness scores produced by a range of commonly used detectors, and ensembles of detectors. Our ultimate goal is to build a wearable “invisibility” cloak that renders the wearer imperceptible to detectors. [...]



CICERO: An AI agent that negotiates, persuades, and cooperates with people

Today, we’re announcing a breakthrough toward building AI that has mastered these skills. We’ve built an agent, CICERO, that is the first AI to achieve human-level performance in the popular strategy game Diplomacy*. CICERO demonstrated this by playing on webDiplomacy.net, an online version of the game, where CICERO achieved more than double the average score of the human players and ranked in the top 10 percent of participants who played more than one game. [...]


Stable Diffusion 2.0 release

Stable Diffusion 2.0 delivers a number of big improvements and features versus the original V1 release, such as: new text-to-image models, super resolution upscaler, depth-to-image, [...]


Why Meta’s latest large language model survived only three days online

On November 15, Meta released the latest in a line of large language models. Galactica had been trained on 48 million examples of scientific documents including articles, textbooks, and encyclopedias. The promise was huge: Galactica was introduced as a model that "can summarize academic literature, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more." But the model couldn't quite live up to its hype, as some researchers found. After just three days, the Galactica demo was taken down. [...]



Tip of the month

This month's tip is:

You absolutely need to learn Transformers


Transformers are dominating the Machine Learning industry: we first saw them in NLP with the "Attention Is All You Need" paper. From there, they've crossed into every other domain, from audio to computer vision (Whisper by OpenAI, the Vision Transformer for image classification, DETR for object detection, even segmentation...).


Why?

Because the architecture is built around the attention mechanism, it becomes a very versatile differentiable computer, making recurrent connections and convolutions less and less necessary.
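To make that concrete, here's a minimal sketch of scaled dot-product attention, the core operation inside every Transformer layer. Shapes and names are illustrative:

# Scaled dot-product attention, the heart of the Transformer.
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (batch, seq_len, d_k)
    d_k = q.size(-1)
    # Similarity of every query with every key, scaled to keep
    # the softmax gradients well-behaved.
    scores = q @ k.transpose(-2, -1) / d_k**0.5   # (batch, seq, seq)
    weights = F.softmax(scores, dim=-1)           # attention weights
    # Each output position is a weighted average of the values.
    return weights @ v                            # (batch, seq, d_k)

# Toy usage: one sequence of 4 tokens with 8-dimensional embeddings.
x = torch.randn(1, 4, 8)
out = scaled_dot_product_attention(x, x, x)  # self-attention
print(out.shape)  # torch.Size([1, 4, 8])

Every token gets to "look at" every other token in one step, which is exactly what recurrence and convolution struggle to do cheaply.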


The attention mechanism might be counter-intuitive at first. I recommend starting with the The Illustrated Transformer and Attention? Attention! blog posts.


Once you've got an overview, this repository contains quite a few useful resources!



'Till next month, you can find everyone here:


Have a fantastic month!



If you'd like to support our community and get access to millions of amazing articles and tutorials, consider subscribing to Medium's program via our link; we'll receive a small portion of your fee.

All of the money will be used to sponsor our Discord's events prizes!



