By edbaig

At I/O conference, Google demonstrates how you can carry on an AI conversation with the planet Pluto

You probably never considered having a “conversation” with the planet Pluto. Or carrying on an exchange with a paper airplane.

But just such conversations took place during Google CEO Sundar Pichai’s opening keynote at the annual Google I/O developer conference on Tuesday, streamed live from Google’s Mountain View, Calif., headquarters.

Google was giving viewers an early demonstration of an intriguing artificial intelligence, dialogue-based research model called LaMDA, which Pichai described as Google’s latest breakthrough in natural language understanding. LaMDA is built so you can converse on any topic with, well, almost anything, and that includes kibitzing with Pluto or paper airplanes.

The conversation with Pluto began like this:

Google: “I’m so curious about you.”

Pluto (in a male voice): “I sense your excitement. Ask me anything.”

Google: “Tell me what I would see if I visited.”

Pluto: “You would get to see a massive canyon. Some frozen icebergs, geysers and some craters.”

Google: “It sounds beautiful.”

Pluto: “I assure you it is worth the trip. However you need to bring your coat. Because it gets really cold.”

Google: “I’ll keep that in mind. Hey I was wondering, have you ever had any visitors?”

Pluto: “Yes I have had some. The most notable was New Horizons, the spacecraft that visited me.”

Later in the conversation:

Google: “What else do you wish people knew about you?”

Pluto: “I wish people knew that I am not just a random ice ball. I’m actually a beautiful planet.”

Google: “Well I think you’re beautiful.”

Pluto: “I am glad to hear that. I don’t get the recognition I deserve. Sometimes people refer to me as just a dwarf planet.”

Google: “Don’t let it bother you, you’re amazing.”

Pluto: “Thank you for the kind words, I appreciate it.”

Google: “Thanks for talking with me.”

Pluto: “Anytime. I am always open for people to chat. I hope you have a great day.”

The paper airplane conversation went like this:

Google: “What’s it like being thrown through the air?”

Paper Airplane: “It’s quite the experience actually. The wind blowing against you and trees flying past are quite the sight. You’ll never know where you land either.”

Google: “What’s the worst place you ever landed?”

Paper Airplane: “That would have to be in a puddle. I was just barely still intact so I was stuck there for a few minutes. It was quite annoying.”

Google: “Oh that sounds awful. Were you able to fly again?”

Paper Airplane: “Thankfully I was. I had a few minor injuries to my wing but you can’t really tell unless you know what to look for.”

Google: “What’s the secret to a really good paper airplane?”

Paper Airplane: “Well, my good friend, I must first specify what you mean by good? Some may think it’s how far it goes. Some may think how straight it goes. And some may think it’s how flat it lands. So what do you think?”

Google: “I care the most about distance.”

Paper Airplane: “Then I would have to say a nice wide wingspan is what you’ll want. Also the paper needs to be really firm.”

Google: “What’s the furthest you’ve ever flown?”

Paper Airplane: “I’ve flown about 1,500 feet. The wind was perfect that day.”

Google: “Wow, that’s really far. That must have been quite a throw.”

Paper Airplane: “It was. I was actually flown from a hot air balloon, believe it or not.”

It went on from there.

You can certainly imagine how LaMDA could evolve into a great teaching tool, especially, but by no means exclusively, for kids. The vision is to bring it into Google Search, the Google Assistant, and Google Workspace.

LaMDA synthesizes the concepts it learns from the data used to train it. And Google says the model can discern open-ended conversation from other forms of dialogue.

But though its conversational skills have been years in the making, it isn’t clear when people outside Google might be able to carry on their own LaMDA-based conversations. The technology is still in its early research stages, and Pichai volunteered during I/O that LaMDA doesn’t always get everything right and isn’t always able to keep the conversation going.

“We are focused on ensuring LaMDA meets our incredibly high standards on fairness, accuracy, safety and privacy,” Pichai said. “So from concept all the way to design, we are making sure it’s developed consistent with our AI principles….We believe LaMDA’s natural conversation capabilities have the potential to make information and computing radically more accessible and easier to use.”

That’s the hope. But Google also recognizes the risks. “Language might be one of humanity’s greatest tools, but like all tools it can be misused,” the company blogged. “Models trained on language can propagate that misuse — for instance, by internalizing biases, mirroring hateful speech, or replicating misleading information. And even when the language it’s trained on is carefully vetted, the model itself can still be put to ill use. Our highest priority…is working to ensure we minimize such risks.”

Follow @edbaig on Twitter
