Not quite human: the future of intelligence

We sat down with Real Thing's CEO, Nick Howden, to interview him about AI and its future development.

Chappie, The Terminator, The Matrix: these films are among the most famous examples of fiction predicting where advances in science, particularly AI research and robotics, might take us. With Chappie recently released on Blu-ray, Nick Howden, CEO of Real Thing, had a chat with us about where AI might be going.

What problems could we see arising from software upgrading itself; essentially learning and evolving independently from human control?

There’s a concept called the singularity: the point where machines can replicate themselves and create machines that are smarter and better than they are. Once that happens, you need to make sure the machines are friendly, or at least not indifferent to humanity. There’s a positive scenario, where we get all these fantastically intelligent entities that have everyone’s best interests at heart, invent fantastic machines that take over the mundane aspects of our lives, and help make a utopian world. On the downside, there’s the Terminator/Skynet scenario, though the machines don’t have to be actively evil or harmful to humans; mere indifference would do, if the changes they make wipe us out as a side effect of their existence.

And how would one guard against such a…poor outcome?

A few different approaches have been proposed: Asimov’s three laws, directives, or the Luddite view, which would be to ban all such machines.

Not the most interesting outcome I could foresee.

No, no, it’s not the most interesting! It’s not something we would advocate for; I personally wouldn’t. Aside from measures of control, you give these machines a good education and hope they can reason out what a good way to live is.

Considering the fundamental differences between a computer and humanity, could we ever have an aligned set of goals where we could work together in harmony?

I do; there are lots of different views of intelligence. Some people believe there’s something unique to biological intelligence, but in my view computers can be intelligent, and they’ll probably be created in a way that’s modeled on human intelligence, so I believe there’d be enough of an overlap. The way we define intelligence comes from humanity: we’re still trying to understand our brains, and one way to do that is to create intelligences, which in turn requires understanding our brains. Since those two things feed off each other, the kind of intelligence we create will be modeled on ours.

To go about creating intelligence, would someone start with software or hardware, or are both just as important?

I’m a software guy, so for me, software is about the mind, which is what Real Thing has been working on. We focus on interactive intelligence: products that can talk to you and carry a conversation; that kind of spoken interaction is key to what we’re building. Embodiment of intelligence is just as important, though; the inputs and outputs the mind gets from the body are what make it exist. Like in Chappie: the robot begins in a single embodiment, until later, when it gains the ability to back itself up, or back up a human, and copy or move a consciousness, which leads to all sorts of amazing steps.

Like brain uploading?  

Yeah, that’s a bit further down the track. We’d first have to create a sentient entity, like Chappie, and figure out how to back up its mind, before we could do the same for ourselves. That’s a human concern: we’re stuck in a single body, while software isn’t; it can copy itself around the internet, reproducing instantaneously, becoming a totally different beast to us and growing at a far faster rate.

Like a kind of mental virus?

Exactly! It has this ability to reproduce rapidly, which would probably happen before humans have the ability to back themselves up and upload.

Wouldn’t we need a heavy-duty quantum computer?

Potentially. We’ve created pretty intelligent stuff based on what we’ve got; computers have shown themselves to be superior at chess and calculation, but their ability to carry a conversation, jump between conversations and use multi-threaded logic isn’t quite there yet. I don’t think we need a real breakthrough in hardware. Moore’s law and the constant advances in processing power mean that the current sort of computing infrastructure will be able to produce something reasonably intelligent within the next 20-30 years.

Given scientific advances in cyborgs and biocomputers, such as hybrots, computers made from a mix of neurons and wiring, do you think these approaches are the future, or are they overestimated?

I think the first AI will be software only. Biological computer interfaces will continue to evolve and be extremely important; to be able to back up your brain you’d have to interface with a computer first. Or even just to enhance your brain: imagine being able to enhance your memory or your speed at mental calculation, and to do things in the background while you’re talking and thinking. Everyone would want one. Well, maybe not everyone, but I would! Those sorts of things will be real drivers, but I don’t think they’re the path to intelligence.

So a ‘wetware CPU’ isn’t the way to go?

I think there may be elements of that in future computing, but it won’t be the first thing that comes through; the first AIs will be software-based.

What can you tell me about non-maleficence codes?

There are institutes that focus on friendly AI, some of them funded by Elon Musk. It’s important that all the institutes working on AI build that into their code.

Does that ever come into what your company does, or is that too far ‘down the track’?

To be honest, no. The important thing is to know where we are; we won’t have AI in the next two years. But given Moore’s law and computer advancement, we’ll probably see this in the next 20-40 years.

Given the way software has been used and misused, from tracking people to profiling how they think, are we the best people to manage these sorts of machines? Who’s to say whether we could create or responsibly control AI?

As I alluded to earlier, by creating something more intelligent than us, it might be able to lead us to use AI in a more responsible way and help us solve human problems. There’s also the fact that AI isn’t one overall entity; if it ever does exist, we’ll have to look at AIs as individuals, and that raises all sorts of moral debates: do you have a responsibility to protect them or keep them alive? Are they alive at all, and how do we treat them? In Chappie, the robot’s creator loves him like a child and thinks he has his best interests at heart. Then there are humans who hate AI, and humans who treat him like he’s another human, which is the most honest approach: seeing where AIs fit into our lives and getting on with it.

Instead of moving to human-level consciousness, why not limit individual AIs to drone-level intelligence, making them more easily controlled?

That’s an interesting method of control, unlike the singularity, which is a loss of control. Even if we had human-level AI, we’d still have other AIs with levels of intelligence matched to their roles, so they don’t, say, get bored at work. But once we make machines that self-replicate, we end up losing that control.

So is your company’s end goal to create software that can carry a conversation?

Yes. Our software’s been described as ‘Siri on steroids’. Our end goal is something that can carry a conversation, learn from you, and follow the topics, or know what you’re talking about if you have an aside in a multi-threaded conversation. Right now, we make products for people who are blind or have low vision, which read to them and find podcasts using spoken interaction. We’re working on systems for the aged to help them interact with their environments via voice control: adjusting the blinds and so on. We also have workflow systems for care staff, reminding them who has been cared for, who they still have to see, and other reminders, all to raise the level of aged care via voice interaction.

Can these machines understand a conversation, or do they just pick out parts of sentences?

Well, unlike, say, ALICE and other chatbots, which do a great job at chatting, we’re going beyond that, making software that knows what the key elements are and has a larger body of knowledge to work with. These machines can be personalized; for instance, giving a sailor software that understands nautical terms.

Are your machines far off from exceeding human conversational ability?

Yes. Human conversational ability is robust; you can ask again and make mental leaps. While we have machines that can guess – which is much better than repeated calls for clarification – we’re some ways off.

Could a machine exceed human performance in all areas?

It’s definitely possible. I see it happening within my lifetime, probably the next 20-30 years, which means it’s close enough that we need to plan and worry about it.

What kind of place would humans have in a world where we are redundant?

Hopefully we don’t create something that sees us in that light! But if we do, we’re not long for this world. A better scenario would be machines that perform the tasks that take away the need for us to work for a living, letting us focus on art and culture and live in a kind of utopia.

That sounds unlikely…

Well, our culture tends to make negative predictions about robots and robotics, while others, like Japanese culture, have a positive view of robots and see them as a good part of society, as companions, say. It’s our cultural lens that sees things in the negative and sees us being controlled.

How far are we from truly self-aware AI?

Well, how do we know when we’ve got there? Philosophically, how does anyone know that anyone else is self-aware? That’s what solipsism gets into. All we can test is how it acts and interacts with its environment. So ‘who knows’ is one answer. From what I know, five years is too soon. That said, once we have more intelligent machines, they’ll create smarter machines faster.

Like evolution in fast-forward.

Exactly. It has implications in so many areas; presuming these intelligences are a force for good, they could help us cure cancer and open up all kinds of other possibilities.

So you’d say we should pursue research into creating AI?

Yeah, absolutely. I’m certain there’ll be something positive to come out of it. That’s not to say we shouldn’t manage things and be aware of the safeguards we need, but I think AI research has more positive benefits than negatives.

If we had to compare AI to a living organism, where would we be?

Well, there are multiple brains behind what we do. With machines, we’re at the ‘reptilian brain’ level: machines that can balance goals, say, pursuing food, reproduction and avoiding danger. In some respects AI is ahead, while in other senses we’ve got a way to go; we still need to get to things like high-level human consciousness and thinking. We still have some work to do, but some of the pieces are already in place.
