Boston Dynamics: Spot robots aren’t evil and won’t be turned into weapons


As videos of dog-like robots made by Boston Dynamics go viral on the internet, the machines’ uncanny abilities have also sparked worries that they could become a threat to humans.

Not so, says their creator Marc Raibert in an interview with AFP at the Lisbon Web Summit, claiming that the Spot robots aren’t evil and won’t be turned into weapons.

The first 1,000 Spot models are to be shipped to customers by the summer of 2020.

US engineering and robotics firm Boston Dynamics, founded in 1992, was bought by Google in 2013; Google then sold the firm on to Japan’s SoftBank in 2017.

How will buyers use Spot?

“So far the kind of people we’re shipping to and working with are developing applications in construction. It’s a popular goal in construction these days to collect data to measure the progress of the construction site,” Raibert said.

“We’re taking some of those same sensors, and putting them on the robot and having it travel and collect data.

“There’s a real opportunity to have robots do that on a more routine basis. The robots can localise the sensors much more precisely than people can.

“Another area that we’re working in, we call it gas and oil, but it’s really any facility that needs to be monitored. We’re doing a little bit of work with what we call public safety.

“Police go to a hazmat situation, a bomb threat, or just somewhere with an unknown package, and rather than have a human go and poke at it, they’re using the robots to poke at it.”

Could they harm humans?

Raibert explained that robots “can see the people as obstacles and avoid obstacles. But they’re really not designed to work closely with people. We’re not selling these to people who put them in their homes.

“Even in offices, there’s only limited use. We want everything to be safe. I think some of the fear of robots that does exist is not that the robot will make a mistake and bump into something; you know, that’s going to happen sometimes. Like car accidents, right?

“There’s another fear, which is more of a science-fiction fear: that the robots are going to be so smart that they’ll be angry with us. I don’t think that’s realistic for today’s robots.

“Hollywood has taken things to an extreme place, in what they portray.

“The robots are not evil; they don’t have emotions or egos or ambition like people do.

“Our current license agreement for Spot says that you’re not allowed to use the robot to harm a person or to intimidate a person.

We don’t want anybody to weaponise them.”

When will you become profitable?

Raibert said profitability for Spot has a horizon.

“I won’t tell you what it is, but it’s a reasonable horizon. It’s not a ridiculous horizon. We have a business plan for going into the black, but it’s a few years out. So today it’s Spot. We’re already shipping it.

“The next thing is a logistics robot. We’re working on one called Handle, which is designed to work in warehouses, moving boxes around. It looks like a bird. I think that that is going to be a bigger scale activity, but much narrower focus than Spot.

“And then the future is things like Atlas. I don’t think Atlas will ever be shipped. But what we learned on Atlas will make its way into our other products eventually.

“Although we make robots to sell, I think our long-term interest is in understanding how it is that people and animals can move in the world with such mobility and dexterity. That’s a grand challenge, a scientific challenge.”

A Chinese university has enlisted teenagers straight from high school to work on a new experimental program aimed at developing artificial intelligence (AI) weapons.

The group at the Beijing Institute of Technology (BIT) includes 27 boys and four girls, chosen to train as the world’s youngest AI weapons scientists, according to the BIT website.

Those selected for the “experimental program for intelligent weapons systems” were all under the age of 18 and carefully chosen from a list of 5,000 candidates, the BIT website said.

One BIT professor who was involved in the screening process told the South China Morning Post that candidates needed to be more than just bright students.

“We are looking for other qualities such as creative thinking, willingness to fight, and persistence when facing challenges,” the BIT professor, who preferred to remain anonymous, told the Post.

“A passion for developing new weapons is a must … and they must also be patriots.”


PHOTO: The inaugural class of BIT’s experimental intelligent weapons program. (Supplied: Beijing Institute of Technology)

The program, which launched on October 28, is the latest move in an international race to utilise AI technology for modern warfare, with the US and China leading the way.

“We are walking a new path, doing things that nobody has done before,” student representative Cui Liyuan said at the launch.

“It sounds like a brag when you say we are leading the modern war trend … but we should be down-to-earth and inherit the spirit of the older generation … who are not afraid of difficulties and hardships.”

According to the Post, students on the course will be mentored by two senior weapons scientists and, after completing a semester of coursework, will be asked to choose a specialty field and be assigned to a relevant defence laboratory for hands-on experience.

Following the four-year course, the students will then be expected to take on a PhD at the university and become China’s next AI weapons leaders, according to the BIT website.

In 2017, Chinese President Xi Jinping explicitly called for a greater national focus on military AI research.

Earlier this year, Chinese scientists said they were developing “giant” AI submarines that can carry out complex missions without on-board human control, ready to deploy by 2020.

China also has the world’s largest testing facility for drone boats and has other projects focused on land and air-based drone weaponry.

In a demonstration of China’s expanding military technology, a Chinese state-owned company announced on Thursday that it was developing a stealth combat drone that could “fly long hours, scout and strike the target when necessary”.

But despite China’s ramped-up operations, the US still leads the world in the use of drone and AI technology for the military, drawing on the expertise of companies such as Google and Boeing to develop new systems.

According to the US Department of Defence, the US is developing a range of tactical robot strike teams, land-based decision-making robots, and mass drone “swarms” that could overwhelm enemy bases and scramble their communications.

In September, the US Defence Advanced Research Projects Agency (DARPA) — which is tasked with ensuring the US is never “the victim of strategic technological surprises” — announced a $US2 billion ($2.7 billion) campaign to develop the next wave of AI technologies that could be used in, among other things, new-age weapons.

PHOTO: A Predator drone. Drone technology is one of the main examples of AI at work in defence forces. (US Air Force: Tech Sgt. Effrain Lopez)

The announcement came as thousands of scientists, engineers and entrepreneurs, including Elon Musk, signed a pledge not to work on fully autonomous robotic weapons amid growing ethical concerns about the creation of killer robots.
