Community focus: steer this robot with your mind

This innovator is building an assistive robot using NextMind tech. Follow along and join forces on GitHub!

Fergus Kidd is an engineer passionate about electronics, the Internet of Things, and robotics. He is part of the Emerging Technology team at Avanade, where he explores all kinds of new technologies to advise clients on emerging tech and on how it may shape the decisions they make today to anticipate and deliver the future.

Today he's introducing us to an ongoing robotics project he is leading, in which you can use your mind to control a robot. Read on to hear from Fergus:

We received the Stretch RE1 from Hello Robot in February 2021 and have been experimenting with it ever since, alongside the other projects and topics of interest we have going on.

Stretch RE1 is a mobile manipulator, with a sturdy, maneuverable base and an arm capable of lifting and reaching using a tool head. The Stretch has a load of cool sensors and features just waiting to be used in innovative ways, powered by ROS, the Robot Operating System.

The initial goal is deliberately broad: we wanted to try out new technologies on a robotic platform and see what we could do. Currently, this comes down to three areas:

  1. Remote operation through Rocos and their cloud platform 
  2. Artificial intelligence for autonomous interaction with the environment around the robot
  3. Next-generation control surfaces using NextMind.   

We call the robot Rory, but maybe it’s best if he introduces himself? 

What can Rory do?

Stretch has some out-of-the-box functionality provided by Hello Robot, especially around vision and object identification. Some of these demos even focus on simple assistive scenarios such as mouth finding for assistive feeding.  

For our Stretch specifically – Rory – we've been working to implement Azure Cognitive Services to increase its intelligence, giving Rory some of the capabilities humans take for granted, like speech and vision. Currently, Rory can run a full voice-interactive chatbot, reply with generated speech, and understand a wide range of commands, from simple questions like 'what is your name?' to full voice-controlled maneuvering.
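
To illustrate the speech side, here is a minimal listen-and-reply sketch using the Azure Speech SDK for Python. The subscription key, region, and the `handle_command` helper are placeholders for this post, not the code actually running on Rory.

```python
# Minimal sketch: listen for one utterance, reply with generated speech.
# Key, region and handle_command() are placeholders, not Rory's real chatbot.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)    # default microphone
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)  # default speaker

def handle_command(text: str) -> str:
    """Hypothetical stand-in for the chatbot logic."""
    if "your name" in text.lower():
        return "My name is Rory."
    return "Sorry, I didn't catch that."

result = recognizer.recognize_once()  # wait for a single spoken command
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    reply = handle_command(result.text)
    synthesizer.speak_text_async(reply).get()  # speak the generated reply
```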

Rory also has a great camera system that we have connected to Azure to recognize objects, people, and faces, plus a D435i depth sensor for measuring distances. Rory can use all this visual information to describe what he can see and make decisions about objects and their locations.
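
The object recognition can be sketched along these lines with a call to the Azure Computer Vision analyze endpoint. The endpoint, key, and image file below are placeholders; Rory's real pipeline works on the live camera stream and combines the results with the D435i depth data for distances.

```python
# Sketch: ask Azure Computer Vision what a single camera frame contains.
# Endpoint, key and frame.jpg are placeholders for illustration.
import requests

endpoint = "https://YOUR_RESOURCE.cognitiveservices.azure.com"
key = "YOUR_KEY"

with open("frame.jpg", "rb") as f:
    image_bytes = f.read()

response = requests.post(
    f"{endpoint}/vision/v3.2/analyze",
    params={"visualFeatures": "Objects,Description"},
    headers={"Ocp-Apim-Subscription-Key": key,
             "Content-Type": "application/octet-stream"},
    data=image_bytes,
)
analysis = response.json()
print(analysis["description"]["captions"])    # a short description of the scene
for obj in analysis.get("objects", []):
    print(obj["object"], obj["rectangle"])    # detected objects and bounding boxes
```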

On top of this, we've been experimenting with NextMind as a control input, allowing the robot's movements to be driven entirely by brain control. This makes for a super cool demo, but in practice we'd need to fuse this work with additional automated functionality to fully realize Rory's potential.

Going forward, we'd love to see a more accessible version of our interface, with a camera feed or other live metrics to give the user more help, as well as features like a 'fine control mode' for more delicate manipulation, or NextMind control of different tool heads for lots more functionality. We'd welcome additional input and feedback from the community for more ideas, and we plan to get more insight from potential users too.

“I love seeing technologies that wow people when they interact with them, and to see ideas and use cases rush through people's heads as they imagine what's possible with these ground-breaking innovations. NextMind is certainly one of these wow technologies.”

Fergus Kidd

How did you integrate NextMind?

It's quite simple really: Rory runs a local Python server that accepts commands. When a command is received, the robot moves in that direction, for example “forward”.
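
A minimal sketch of that kind of command server is shown below, assuming Flask. The `/move` route and the `drive()` placeholder are illustrative; on the real robot, `drive()` would call into the Stretch's own Python/ROS interfaces.

```python
# Minimal sketch of a local command server for the robot, assuming Flask.
from flask import Flask, request, jsonify

app = Flask(__name__)

def drive(direction: str, amount: float) -> None:
    """Hypothetical placeholder for the code that actually moves the robot."""
    print(f"moving {direction} by {amount:.2f} m")

@app.route("/move", methods=["POST"])
def move():
    command = request.get_json()  # e.g. {"direction": "forward", "amount": 0.25}
    drive(command["direction"], float(command.get("amount", 0.1)))
    return jsonify(status="ok")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)  # listen for commands from the interface
```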

On the NextMind side, we have a simple Unity interface with NeuroTag controls. On the OnTriggered event we start a timer, and on OnRelease we call a script that sends a command to the Python server via REST, where the amount to move is proportional to how long focus was maintained.

This gives us a more precise way to control the robot's movement. The ratio of time to movement can also be adjusted for finer control; eventually we'd like to expose that as an option in the interface.
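
The interface itself is built in Unity with the NextMind SDK, but the time-to-movement mapping can be sketched in Python against the hypothetical `/move` server above. The `send_move` helper and the 0.1 ratio are illustrative assumptions, not values from the project.

```python
# Sketch of the proportional time-to-movement mapping, in Python rather than
# the Unity C# used in the project. focus_seconds comes from the NeuroTag
# timer (started on OnTriggered, stopped on OnRelease).
import requests

METRES_PER_SECOND_OF_FOCUS = 0.1  # illustrative ratio; adjustable for finer control

def send_move(direction: str, focus_seconds: float,
              server: str = "http://localhost:5000") -> None:
    """Send a movement whose size is proportional to how long focus was held."""
    amount = focus_seconds * METRES_PER_SECOND_OF_FOCUS
    requests.post(f"{server}/move",
                  json={"direction": direction, "amount": amount})

send_move("forward", focus_seconds=2.5)  # 2.5 s of focus -> 0.25 m forward
```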

I had no Unity development experience at all before starting this project. The NextMind SDK samples were incredibly easy to get up and running and proved very helpful in development. I actually had the whole thing up and running from the tutorial alone by simply replicating the objects and adding more NeuroTags for more controls.

When playing around with new technologies, things rarely go as planned the first time, so I was amazed at how quickly I was up and running with a working interface and device. The ability to simulate both the device and focus within the Unity SDK has also been a valuable tool and saved lots of time whilst testing each movement function on the Stretch.

“When playing around with new technologies, things rarely go as planned the first time, so I was amazed at how quickly I was up and running with a working interface and device.”

Fergus Kidd

What are the potential applications for the robot?

The Stretch robot's open-source approach and interchangeable tools make it a versatile and adaptable platform. With NextMind control specifically, the obvious application is as an assistive device for people with challenges such as limited fine motor control.

There may also be applications that combine traditional control surfaces with the NextMind, for example using the NextMind to control larger movements whilst simultaneously using a control surface for fine manipulation.

The Stretch also comes with an interchangeable tool head with an open-source design. This means you can build almost anything you can think of to add functionality to Rory. So far we've played around with the gripper and the pen tool, but other tools we could add in the future include magnetic grabbers, tactile grips, or sensor probes.

How can we follow along with Rory’s progress?

Open innovation – working with parties outside of our organization, such as start-ups, academia, open-source communities, and foundations – is another way to generate ideas and solutions that we'd never otherwise have thought of. It's early days for Avanade in this space, to be honest, but we see it as a strong focus and a theme that we're continuing to apply in our innovation projects.

We'd love for the NextMind community to join us in our development. Everything we are doing is open source. We appreciate that not many people will have access to a Stretch, but the server approach means it's easy to test the functionality on other physical devices (particularly those that support ROS), or even to build a simulated Stretch. Specifically, we'd love community help with the user interface design: improving accessibility, as well as getting a camera feed displayed in the centre of the Unity controls to make it easier to see what is going on from a distance.

You can follow our progress on the Avanade GitHub.

We'll also be posting our progress as we go on our Avanade Techs and Specs blog.

I also hope we’ll be sharing more with you here in the future! Feel free to jump into the GitHub, review our community standards, and then get started with any open issue. 

How are you using the NextMind Dev Kit? Tell us in the comments for a chance to be featured!
