Those paying attention to what’s going on in the cyborg sector of tech know that substantial progress continues to be made in the effort to fuse wetware and hardware. Scientists, for example, can make shocking things happen with monkeys and brain-machine interfaces. And now, in another step toward breaking down the barrier between people and their computerized counterparts, there is Conduct-a-Bot: a suite of wearable sensors that allows a user to control drones with gestures.

In the above video, which comes via Futurism, Conduct-a-Bot is used by Joseph DelPreto to pilot a drone through a series of hoops. DelPreto is a PhD student in the Distributed Robotics Lab at MIT, and he and Daniela Rus, the Director of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), published the research behind Conduct-a-Bot in Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction.

The researchers say that Conduct-a-Bot works using a combination of wearable sensors and neural networks (machine learning algorithms) that allows a connected drone to be controlled remotely with gestures. According to the CSAIL press release for the Conduct-a-Bot study, the current system can detect eight predefined navigational gestures without any calibration or training data. That means anybody who puts on the suite of wearable sensors can control the connected drone immediately, with no fine-tuning required to accommodate a new user; this is why the researchers refer to the tech as “plug-and-play.”
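To give a sense of how a plug-and-play gesture classifier might fit together, here is a minimal sketch: a small neural network, standing in for one trained offline on data from previously collected users, maps a window of raw sensor readings to one of eight gesture labels with no per-user calibration step. Everything in it (the gesture names, feature sizes, and random weights) is hypothetical and illustrative, not CSAIL’s actual model.

```python
# A minimal sketch (not the authors' code) of the "plug-and-play" idea:
# a classifier trained ahead of time on many users maps a window of
# muscle (EMG) and motion (IMU) sensor readings straight to one of
# eight gesture labels, so a new wearer needs no calibration step.
# All names, shapes, and weights here are hypothetical.
import numpy as np

GESTURES = [
    "stop", "go", "up", "down",
    "left", "right", "rotate_cw", "rotate_ccw",
]

rng = np.random.default_rng(0)

# Stand-ins for weights that would be learned offline from prior users.
W1 = rng.normal(size=(64, 32))   # 64 input features -> 32 hidden units
b1 = np.zeros(32)
W2 = rng.normal(size=(32, len(GESTURES)))
b2 = np.zeros(len(GESTURES))

def featurize(window: np.ndarray) -> np.ndarray:
    """Reduce a (samples, channels) sensor window to a fixed feature vector."""
    # Simple per-channel statistics; real systems use richer features.
    feats = np.concatenate([window.mean(axis=0), window.std(axis=0)])
    return np.resize(feats, 64)  # pad/trim to the network's input size

def classify(window: np.ndarray) -> str:
    """Forward pass of a small fully connected network over one window."""
    x = featurize(window)
    h = np.maximum(0.0, x @ W1 + b1)          # ReLU hidden layer
    logits = h @ W2 + b2
    return GESTURES[int(np.argmax(logits))]   # most likely gesture

# 200 samples across 8 sensor channels, e.g. EMG + accelerometer streams.
print(classify(rng.normal(size=(200, 8))))
```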

MIT CSAIL

The gestures Conduct-a-Bot uses involve basic hand and arm motions that most people can make. For example, when a user clenches their fist, that is a command for the drone to stop; when a user rotates their hand, that is a signal for the drone to turn. The gestures are detected using wearable muscle and motion sensors, and the neural networks analyze the inputs from those sensors and classify them into distinct gesture categories.

This means that while various users will all, say, clench their fists slightly differently, thanks to Conduct-a-Bot’s machine learning algorithms, any individual’s version of “clenching” can still be recognized as that gesture.
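The remaining step is turning each recognized gesture into a flight command. Below is a hedged sketch of what that dispatch layer could look like; the Drone class, its methods, and the gesture-to-action table are hypothetical placeholders, not CSAIL’s actual drone interface.

```python
# A hedged sketch of mapping classified gestures to drone commands.
# The Drone class and gesture names below are hypothetical stand-ins.
from typing import Callable

class Drone:
    """Stand-in for a real drone control client."""
    def stop(self) -> None:
        print("hovering in place")
    def turn(self, degrees: float) -> None:
        print(f"turning {degrees} degrees")
    def move(self, direction: str) -> None:
        print(f"moving {direction}")

def build_command_table(drone: Drone) -> dict[str, Callable[[], None]]:
    # Each recognized gesture label triggers exactly one drone action.
    return {
        "stop":       drone.stop,                 # clenched fist
        "rotate_cw":  lambda: drone.turn(90),     # hand rotated clockwise
        "rotate_ccw": lambda: drone.turn(-90),    # hand rotated counterclockwise
        "left":       lambda: drone.move("left"),
        "right":      lambda: drone.move("right"),
        "up":         lambda: drone.move("up"),
        "down":       lambda: drone.move("down"),
        "go":         lambda: drone.move("forward"),
    }

drone = Drone()
commands = build_command_table(drone)
for gesture in ["go", "rotate_cw", "stop"]:  # stream of classified gestures
    commands[gesture]()
```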

MIT CSAIL

“By using a small number of wearable sensors and plug-and-play algorithms, the system aims to start reducing the barrier to casual users interacting with robots,” the CSAIL press release says. The release adds that this kind of technology “builds an expandable vocabulary for communicating with a robot assistant or other electronic devices in a more natural way.”

What do you think of Conduct-a-Bot? Are you ready to control mad drone swarms with basic gestures, or would you prefer there be more safety mechanisms in place before this sci-fi activity becomes more prevalent? Let us know your thoughts in the comments!

Feature image: MIT CSAIL