The Jibo SDK: Reaching out Beyond the Screen

Jonathan Ross
May 23, 2016

Almost four years ago I received a call. It was one of those phone calls you might wait your entire career for but never get. It was from Andrew Rapo, my former boss at Disney. (I had actually met Andrew during a chance encounter almost ten years ago, when I attended a conference on the wrong day, a lucky mistake that changed the course of my career.) The reception was shoddy and I was on my way to a meeting, but I heard just enough to make out, “I can’t tell you who I met, but it’s a certain ‘celebrity roboticist’ from MIT. She’s starting a company and needs a…” And without hesitation I said,

“It’s Cynthia Breazeal, and I’m in!”

Right from the beginning we knew we wanted to make robotics programming as accessible as possible, and we set out to do exactly that. Now, just a few years later, we’re releasing the SDK to the general public, and I couldn’t be prouder of the hard work the team here at Jibo, Inc. has put into it. We’ve worked hard to stick to our mission of bringing the complex, multi-discipline world of robotics, character-driven development, and multimodal interaction to all developers.
 

Built for Expression

My background is in electrical and computer engineering, but I went into software development because I feel it is the most powerful tool for being expressive. Most of my career has been in online games. I love games because you get to create your own world with your own rules and your own characters, but there was always a yearning to reach out beyond the screen and be part of the real world. I would daydream about all the projects I’d create if I could build an app that could interact with people in a more natural way, through voice and expression rather than poking and pushing buttons. Platforms with thriving developer communities, like Arduino and Raspberry Pi, have done amazing things to bring robotics and embedded development to software developers like me, but even with all the support and available knowledge out there, projects always ended up being about the mechanics of how to do something simple, like moving a bunch of servos or detecting a face in an image. I always felt like I lost the vision of the project in its complexity.

This is why I’m so excited to be releasing the Jibo SDK. Moving motors is easy with the animation editor and kinematics system. We’ve built a perception system into Jibo himself that streams how he sees the world directly to a skill (robot application) and allows Jibo to break the fourth wall through the screen. This is the first time I’ve had a system that allows me to interact with the real world while giving me enough tools to actually be creative and expressive.
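To make the idea of a skill reacting to that perception stream more concrete, here’s a minimal, self-contained sketch of the event-driven pattern involved. It deliberately uses Node’s built-in EventEmitter as a stand-in, and the event and field names are hypothetical; they are not the Jibo SDK’s actual perception API (see the API docs for the real interfaces).

    // Sketch only: Node's EventEmitter standing in for Jibo's perception stream.
    // Event and field names here are made up for illustration.
    const { EventEmitter } = require('events');

    const perception = new EventEmitter();   // stand-in for the real perception stream

    perception.on('person-entered', (person) => {
      // In a real skill, this is where Jibo would orient toward the person and greet them.
      console.log(`Person spotted at x=${person.x}, y=${person.y}`);
    });

    // Simulate a detection arriving from the vision system.
    perception.emit('person-entered', { x: 0.8, y: 1.2 });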
 

It’s All in the Language

"A programming language is low level when its programs require attention to the irrelevant."
- Alan J. Perlis.

One of the first (and maybe most controversial) things you’ll notice in the Jibo SDK is that skills are built in JavaScript. We put a lot of thought into this one. While C/C++ has traditionally been associated with robotics, and while it’s faster than any scripting language, it’s not the best language for creating expressive and interactive experiences. We also looked at building Jibo on Java and the Android platform. In fact, our first prototype was built on Android, and while the name was almost too good to be true, we ran into technical difficulties, such as the lack of support for hard float (hardware floating point).

Then there was JavaScript, a language we believed had many advantages. The barrier to entry is lower than with C/C++. It’s cross-platform and cross-architecture. It’s one of the fastest-growing languages, has a huge ecosystem supporting it (there’s an npm module for almost anything), and it runs everywhere from the web to mobile to the cloud. We built all of the computationally intensive parts of our platform in C++ and provide access to them through JavaScript APIs. This includes the animation and kinematics systems, computer vision, speech recognition, voice identification, natural language understanding, text-to-speech, and audio source segmentation and localization. Moreover, JavaScript excels at graphics and rendering, application logic, and networked communication, all of which make it a strong choice as the language for building on our platform.
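As a rough illustration of that split, here’s a sketch of skill logic in JavaScript sitting on top of a native, callback-style binding. The nativeTts object below is a stub standing in for a C++-backed subsystem; it is not the SDK’s real text-to-speech API, just the general shape of the pattern.

    // Sketch of the division of labor: heavy lifting in native code, orchestration in JS.
    // 'nativeTts' is a stub for a hypothetical C++ binding, not the real SDK API.
    const nativeTts = {
      speak(text, done) {
        // Pretend the native engine takes a moment, then calls back.
        setTimeout(() => done(null, `spoke: "${text}"`), 100);
      }
    };

    // Thin Promise wrapper so skill logic stays simple and readable.
    function say(text) {
      return new Promise((resolve, reject) => {
        nativeTts.speak(text, (err, result) => err ? reject(err) : resolve(result));
      });
    }

    say('Hello! I am a robot skill written in JavaScript.')
      .then(console.log)
      .catch(console.error);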
 

Why Behavior Trees?

I’m often asked why I chose to build the SDK with behavior trees instead of finite state machines. The answer is that behavior trees are a far more expressive tool for modeling the behavior and control flow of autonomous agents. They are popular in the robotics and video game industries for their ability to coordinate concurrent actions and decision-making processes. Unlike state machines, where there is a single active state at any given time, behavior trees can run multiple behaviors in parallel. This makes them a very powerful tool for coordinating all of Jibo’s sensory input with his expressive output. We will, however, be releasing a graphical Flow Editor in the coming months for high-level dialog management.
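For readers who haven’t used them, here’s a toy behavior tree in JavaScript that illustrates the idea. It’s a generic sketch, not the Jibo SDK’s implementation, and the node names are made up for the example.

    // Toy behavior tree: each node's tick() returns 'success', 'failure', or 'running'.
    const SUCCESS = 'success', FAILURE = 'failure', RUNNING = 'running';

    // Leaf node: wraps a plain function.
    const action = (name, fn) => ({ name, tick: fn });

    // Sequence: runs children in order, fails fast, succeeds only if all succeed.
    const sequence = (...children) => ({
      tick() {
        for (const child of children) {
          const status = child.tick();
          if (status !== SUCCESS) return status;   // FAILURE or RUNNING bubbles up
        }
        return SUCCESS;
      }
    });

    // Parallel: ticks every child each cycle, which is what lets a tree keep
    // tracking a face while also speaking (something a single active state cannot do).
    const parallel = (...children) => ({
      tick() {
        const statuses = children.map(c => c.tick());
        if (statuses.includes(FAILURE)) return FAILURE;
        if (statuses.includes(RUNNING)) return RUNNING;
        return SUCCESS;
      }
    });

    // Example tree: keep looking at a person while greeting them.
    const greet = parallel(
      action('track-person', () => { console.log('tracking person'); return RUNNING; }),
      sequence(
        action('say-hello', () => { console.log('saying hello'); return SUCCESS; }),
        action('smile', () => { console.log('animating a smile'); return SUCCESS; })
      )
    );

    console.log('tree status:', greet.tick());   // 'running' (still tracking)

Ticking the tree drives both branches on every cycle, so the tracking action keeps running while the greeting sequence completes, which is exactly the kind of concurrent sensory-plus-expressive coordination described above.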
 

So What’s Next?

We’re releasing the SDK in Beta. My developer instincts tell me I’d rather get something out earlier in Beta than something more polished later! This means there are several exciting features still to come. This public release focuses on the robotics aspects of a skill, but in coming releases you’ll see more emphasis on Jibo as a platform. Again, our goal is to provide a platform that lets developers concentrate on their skills instead of the complexities of robotics. Check out our SDK Components video and our API docs to see exactly what’s possible.
 

The Jibo SDK Community

Our SDK is a community tool. We value feedback and will work to continuously improve it. We’ve set up an active discussion forum, moderated by our very talented support team. We hope that you’ll participate, ask questions, answer questions, and give feedback. We want to build a community of developers who are as excited about developing for Jibo as we are.

Thank you so much for your interest in Jibo and our SDK. To get started, see our Installation guide. We can’t wait to see what you build.

 
Jonathan Ross
Head of SDK

Jonathan Ross combines his love of AI, tooling, art and animation pipelines, and system integration at Jibo, Inc., taking the tricks of the video game industry and applying them to the arena of social robotics. He has built a team of talented engineers responsible for developing the Jibo SDK, which enables developers to create complex and richly interactive skills (robot applications). He has a background in electrical and computer engineering and computational neuroscience. His career started in e-learning and quickly transitioned to video games. Jonathan was a lead engineer at Disney on World of Cars Online, a fully immersive 3D virtual world for kids, where he rolled his own custom physics engine, implemented multiplayer racing, and architected its AI scripting system. While at Disney, he was also part of a toys-of-the-future think tank called ToyMorrow, exploring how toys could become more connected. Prior to joining Jibo, Inc., Jonathan was at Zynga, where he helped build some of its largest and most popular social games, including Café World, CityVille, and ChefVille, and built the UI and localization system used across many of Zynga’s games.