Bill Gates Says AI Will Be As Dangerous as Nukes

a_skeleton_03

<Banned>
29,948
29,762
Absurd that you think a program can't have randomness inherent to it, or a process so complicated that at the end of the day it feels random.
Absurd that you would compare fake, preprogrammed randomness with a word like intelligence. It is no different from MS Word at that point.
 

a_skeleton_03

<Banned>
29,948
29,762
machine intelligence is more fun when it's in a mobile robot.
You mean more fun to stay ignorant of the reality of AI and pretend you have any thoughts or opinions on it at all while trying to troll.

I love how it has to be "machine intelligence" for your mind to wrap around it. Read a book or go back to failing at MMOs; you are bringing nothing to this topic but ignorance.
 

Void

Experiencer
<Gold Donor>
9,483
11,200
But not a logical and defined reaction. Completely different makeup in every single human. I am not going to argue about souls, nor did I bring them up here. Only you and Tuco are trying to derail this. You cannot program the randomness of emotion into an AI. Nothing to do with souls at all.
I'm going to admit that I apparently don't have as much knowledge on this subject as others, but I'm confused on one point, which is illustrated quite well in this post.

You're making a terrible assumption right from the outset here, and I think it is clouding every other opinion you give. Why does an Artificial Intelligence of the future need to be programmed the way we program now? You are assuming that one *must* be programmed, with an actual set of if-then statements, and thus any output is simply determined by that program, no matter how complex and convoluted and random it might appear. Given the same input, it would output exactly the same thing every time, unless it really had a "flip a coin" step.
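To make that determinism point concrete, here is a minimal Python sketch (all function names and strings are invented for illustration): a pure if-then program maps the same input to the same output every time; even "preprogrammed randomness" is reproducible once the seed is fixed, and only an external entropy source breaks that.

```python
import os
import random

def pure_response(stimulus: str) -> str:
    # A classic if-then program: the same input yields the same
    # output on every run, with no exceptions.
    if stimulus == "greeting":
        return "hello"
    return "unknown"

def seeded_response(stimulus: str, seed: int) -> str:
    # "Preprogrammed randomness": the output looks arbitrary, but a
    # fixed seed makes the whole run reproducible bit for bit.
    rng = random.Random(seed)
    if stimulus == "greeting":
        return rng.choice(["hello", "hi", "hey"])
    return "unknown"

def entropy_response(stimulus: str) -> str:
    # Only an outside entropy source (os.urandom reads OS-level
    # noise) gives output not determined by the program text alone.
    if stimulus == "greeting":
        options = ["hello", "hi", "hey"]
        return options[os.urandom(1)[0] % len(options)]
    return "unknown"

# Determinism in action: these assertions always hold.
assert pure_response("greeting") == pure_response("greeting")
assert seeded_response("greeting", 42) == seeded_response("greeting", 42)
```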

Why can't an AI of the future be something other than a series of 1s and 0s going through a bunch of circuits? We can't possibly imagine all the technologies created in the future, so why can't one of them be an entirely different kind of "program" that doesn't involve if-then or on-off states? This is no different than applying human understanding to possible alien cultures; we simply cannot grasp it at this time, but it doesn't mean that it can't exist.

If you limit yourself to an AI that must have a program written by someone else, then a_skeleton_03 might be right. But if you allow for future technology to surpass our current limits, who knows what might be possible? We've made tremendous strides in the last hundred years; what happens if we get a thousand more without destroying ourselves? Or a million? Sure, you could then say, "Well, that means *anything* is possible in the future and it is pointless to argue," but you're missing the point. You are saying there is zero chance, but you're making that assumption from the standpoint of current technology, and that's just not fair.
 

a_skeleton_03

<Banned>
29,948
29,762
If you limit yourself to an AI that must have a program written by someone else, then a_skeleton_03 might be right. But if you allow for future technology to surpass our current limits, who knows what might be possible? We've made tremendous strides in the last hundred years; what happens if we get a thousand more without destroying ourselves? Or a million? Sure, you could then say, "Well, that means *anything* is possible in the future and it is pointless to argue," but you're missing the point. You are saying there is zero chance, but you're making that assumption from the standpoint of current technology, and that's just not fair.
I agree, but right now we were both working within the same limitations. Sure, "anything" is most definitely possible. We are arguing about current definitions and limitations. Tuco is describing a robot with an applied purpose, though, and I am describing AI; they are very different. We have to work with our current perceived limitations in order for them to be a grounding point, a frame of reference. If Tuco had ever brought up anything but "machine intelligence" I would have conceded the point of "anything is possible", but he is working in the same constructed space that I, and Bill Gates, are working in.
 

Chancellor Alkorin

Part-Time Sith
<Granularity Engineer>
6,029
5,915
Absurd that you think a program can't have randomness inherent to it, or a process so complicated that at the end of the day it feels random.
This.

Emotion is not "random", though. Emotion is exactly as random as a bunch of bits combining to form a result based entirely on their settings at the time. Just because I call bit #12 "happiness" or "frustration" or "anger" doesn't make it any less of a switch than "cold", "hot" or "tired".

The idea that emotion cannot be programmed into a machine is garbage. A more accurate statement would be "we haven't figured it out yet".
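As a toy illustration of that "bit #12" point (the flag names and response strings are all hypothetical), here is a Python sketch where "emotional" state is stored and tested exactly like any other switch:

```python
from enum import Flag, auto

class State(Flag):
    # "Emotional" bits sit in the same register as physical ones.
    HAPPINESS = auto()
    FRUSTRATION = auto()
    ANGER = auto()
    COLD = auto()
    HOT = auto()
    TIRED = auto()

def respond(state: State, request: str) -> str:
    # The logic does not care whether a bit is labeled "anger" or
    # "cold"; either way it is a switch that is set or not set.
    if state & (State.ANGER | State.FRUSTRATION):
        return f"curt answer to {request!r}"
    if State.TIRED in state:
        return f"deferred answer to {request!r}"
    return f"helpful answer to {request!r}"

print(respond(State.HAPPINESS | State.HOT, "status report"))
print(respond(State.ANGER | State.TIRED, "status report"))
```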
 

Chancellor Alkorin

Part-Time Sith
<Granularity Engineer>
6,029
5,915
No, there is a difference. A machine can be taught to see and have a response to emotion. A machine cannot (the probability is high enough to say this) reason utilizing emotions. That is the difference, and it is why you are responding the way you are: you haven't read a single thing on the subject.

The difference is a huge gap. One is just input and response, and the other would be input with a random response that a machine cannot, if truly intelligent, accept as a valid option. Humans can barely handle it, and we think on a very fluid level and have chemicals that alter our responses somewhat randomly if looked at objectively. Just take a woman on her period, a depressed teenager going through puberty, or a man immediately after sex and give them a problem to solve. It just doesn't work that way in an AI environment with logical guidelines in place, a significant amount of information available, and a vastly more complex problem-solving matrix. It is going to finish a task as optimally as it can, taking into account any limitations it can find.
I love this entire statement, in that it's as wrong as anything can be.

"A machine cannot reason utilizing emotions"? Why not? A response is a response is a response. Take stimulus A and respond using response B. Who cares if the stimulus is logic or emotion? At the end of the day, they are stimuli. You're saying that an AI will never have any innate understanding of emotion as a stimulus. I'm saying that I believe you're wrong. All you have to say about this is "luls, clearly you've never read anything", and I'm telling you you're wrong about that as well. But, just in case it wasn't clear: You are absolutely, unequivocally, entirely full of shit in your assumption that you know everything and that your opinion on this subject is absolute fact. Of course, you will retaliate by asking me for my sources. I don't care, and I won't indulge you.

The very idea that you're using "a woman on her period, a depressed teenager going through puberty, or a man immediately after sex" -- these are all quantifiable stimuli. If you can quantify it, you can reproduce it given the correct programming method. Period. If you don't believe this, well, too bad, because a machine is a machine is a machine. We are machines. Our programming is very complicated and not entirely understood. This will likely not always be the case. We -will-, at some point, be able to duplicate it.

Unless, of course, you believe that we are the most superior form of life, and that all progress ends with us, in which case I believe that you've missed the point entirely, a_skeleton_03.
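One way to read the "quantifiable stimuli" claim, as a hedged sketch (every number and field name below is invented): model the chemical state as a small vector and the response as a deterministic function of stimulus plus state. The same stimulus then produces different responses under different states, yet every run is reproducible once the state is measured.

```python
from dataclasses import dataclass

@dataclass
class ChemicalState:
    # Invented stand-ins for hormone levels, each scaled to [0, 1].
    cortisol: float = 0.2    # stress
    serotonin: float = 0.6   # mood
    adrenaline: float = 0.1  # arousal

def respond(stimulus: str, state: ChemicalState) -> str:
    # Deterministic in (stimulus, state): quantify the state and the
    # "emotional" response becomes reproducible.
    agitation = state.cortisol + state.adrenaline - state.serotonin
    if agitation > 0.5:
        return f"snaps at {stimulus!r}"
    if agitation > 0.0:
        return f"answers {stimulus!r} tersely"
    return f"answers {stimulus!r} calmly"

calm = ChemicalState()
stressed = ChemicalState(cortisol=0.9, serotonin=0.2, adrenaline=0.5)
print(respond("a simple question", calm))      # answers ... calmly
print(respond("a simple question", stressed))  # snaps at ...
```

Under this reading, individual variation is a difference in state values, not evidence that the mechanism itself is unquantifiable.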
 

a_skeleton_03

<Banned>
29,948
29,762
I love this entire statement, in that it's as wrong as anything can be.

"A machine cannot reason utilizing emotions"? Why not? A response is a response is a response. Take stimulus A and respond using response B. Who cares if the stimulus is logic or emotion? At the end of the day, they are stimuli. You're saying that an AI will never have any innate understanding of emotion as a stimulus. I'm saying that I believe you're wrong. All you have to say about this is "luls, clearly you've never read anything", and I'm telling you you're wrong about that as well. But, just in case it wasn't clear: You are absolutely, unequivocally, entirely full of shit in your assumption that you know everything and that your opinion on this subject is absolute fact. Of course, you will retaliate by asking me for my sources. I don't care, and I won't indulge you.

The very idea that you're using "a woman on her period, a depressed teenager going through puberty, or a man immediately after sex" -- these are all quantifiable stimuli. If you can quantify it, you can reproduce it given the correct programming method. Period. If you don't believe this, well, too bad, because a machine is a machine is a machine. We are machines. Our programming is very complicated and not entirely understood. This will likely not always be the case. We -will-, at some point, be able to duplicate it.

Unless, of course, you believe that we are the most superior form of life, and that all progress ends with us, in which case I believe that you've missed the point entirely, a_skeleton_03.
These are all quantifiable stimuli? So every single woman on her period acts the same? Mine is actually quite normal and sane. I have heard some horror stories, though. I know quite a few men that react very differently to all kinds of things directly after sex. I have teenage twins going through puberty; trust me, they act nothing alike, and their emotions are as unpredictable and as broad as the sea.

"Emotions" are not something we can currently define and most likely never will because it depends on the individual makeup of that person, their environment, the mood of those around them, the fact that their skin is .001mm thicker and so they are warmer than the person next to them, the fact that they ate oatmeal that morning with blueberry chunks and are ever so slightly allergic to the blue dye, any number of things. That is why we can't program AI to be like us because they would just then be humans and you would drop the A. We would have just invented humans at that point.
 

Chancellor Alkorin

Part-Time Sith
<Granularity Engineer>
6,029
5,915
It's all chemically quantifiable, yes. It has to be, because it's a chemical reaction. A very complicated reaction, granted.

And yes, we would be inventing humans, with different parts. That... is... the... point. A machine that behaves like a human. But, because we are narcissistic as a rule, we could leave out the undesirable behaviour and attempt to play God while reinventing ourselves, and that should be amusing to watch as it plays out.
 

a_skeleton_03

<Banned>
29,948
29,762
It's all chemically quantifiable, yes. It has to be, because it's a chemical reaction. A very complicated reaction, granted.

And yes, we would be inventing humans, with different parts. That... is... the... point. A machine that behaves like a human. But, because we are narcissistic as a rule, we could leave out the undesirable behaviour and attempt to play God while reinventing ourselves, and that should be amusing to watch as it plays out.
The undesirable behavior IS the randomness that is emotion ....
 

Chancellor Alkorin

Part-Time Sith
<Granularity Engineer>
6,029
5,915
The undesirable behavior IS the randomness that is emotion ....
Says you? Or is this the collective opinion of every learned scholar on the subject?

I mean, "we don't understand it well enough to duplicate it" isn't the same as "it's random, therefore undesirable". What a cop-out.
 

a_skeleton_03

<Banned>
29,948
29,762
Says you? Or is this the collective opinion of every learned scholar on the subject?
Every? I don't know. Everything that I have read? Yes. The assumption of people actually working on it and in that industry? Yes.

I think I have screwed up on the concept of randomness and emotions. The response isn't random; the stimulus is random. The responses our bodies have come from a "list", but the way the body takes in the stimulus of a single question is molded and shaped by the chemicals in our brain and all the other changes our body is going through at that very moment, and has gone through in the past.
 

Chancellor Alkorin

Part-Time Sith
<Granularity Engineer>
6,029
5,915
Every? I don't know. Everything that I have read? Yes. The assumption of people actually working on it and in that industry? Yes.

I think I have screwed up on the concept of randomness and emotions. The response isn't random; the stimulus is random. The responses our bodies have come from a "list", but the way the body takes in the stimulus of a single question is molded and shaped by the chemicals in our brain and all the other changes our body is going through at that very moment, and has gone through in the past.
Yes, but at the end of the day, all of this (including the changes our body is going through) is still quantifiable. I would argue that what you've been trying to get across is that these things are not currently quantifiable by us. This is not the same as "this is impossible".

I don't think we've even scratched the surface when it comes to AI. I don't care if today's experts disagree. "Experts" 50 years ago told stories about how no one would ever be able to afford a microcomputer in their home. Over the next year, a staggering number of people will wear one on their wrist. Technology changes, understanding changes, and so will the very interpretation and understanding of artificial intelligence.

And yes, I do believe that we are arrogant and short-sighted enough to create something that will be capable of destroying the human race.
 

a_skeleton_03

<Banned>
29,948
29,762
I agree, but right now we were both working within the same limitations. Sure, "anything" is most definitely possible. We are arguing about current definitions and limitations. Tuco is describing a robot with an applied purpose, though, and I am describing AI; they are very different. We have to work with our current perceived limitations in order for them to be a grounding point, a frame of reference. If Tuco had ever brought up anything but "machine intelligence" I would have conceded the point of "anything is possible", but he is working in the same constructed space that I, and Bill Gates, are working in.
Yes, but at the end of the day, all of this (including the changes our body is going through) is still quantifiable. I would argue that what you've been trying to get across is that these things are not currently quantifiable by us. This is not the same as "this is impossible".

I don't think we've even scratched the surface when it comes to AI. I don't care if today's experts disagree. "Experts" 50 years ago told stories about how no one would ever be able to afford a microcomputer in their home. Over the next year, a staggering number of people will wear one on their wrist. Technology changes, understanding changes, and so will the very interpretation and understanding of artificial intelligence.

And yes, I do believe that we are arrogant and short-sighted enough to create something that will be capable of destroying the human race.
Reading is fun.
 

Chancellor Alkorin

Part-Time Sith
<Granularity Engineer>
6,029
5,915
Reading is fun.
That doesn't excuse the rest of that stuff up there. You know, the bits where I think you're wrong?

You keep talking about "experts", and I'm trying to get the point across that today's "experts" aren't necessarily right. So, whatever frame of reference they construct may also be entirely mistaken. Human beings have a storied history of being completely out to lunch on all kinds of subjects until Something Happens and a puzzle is solved. This is just another puzzle.
 

Tuco

I got Tuco'd!
<Gold Donor>
45,578
73,684
This isn't even an "anything is possible" thing. There's real work being done on machine intelligence that perceives and responds to human emotion. That field is going to play a massive role in the future of real AI.
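For a sense of what "perceives and responds to human emotion" means at its crudest, here is a deliberately simplistic Python sketch (real affective-computing systems use trained classifiers over voice, face, and text; the word lists and replies below are placeholders):

```python
# Crude affect detection: the word-list version only shows the loop,
# not how production systems actually classify emotion.
NEGATIVE = {"angry", "hate", "broken", "worst"}
POSITIVE = {"great", "love", "thanks", "works"}

def perceived_valence(utterance: str) -> int:
    # Positive hits minus negative hits over the words observed.
    words = set(utterance.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def respond(utterance: str) -> str:
    # Respond to the perceived emotion, not just the literal request.
    valence = perceived_valence(utterance)
    if valence < 0:
        return "I'm sorry this is frustrating. Let's fix it step by step."
    if valence > 0:
        return "Glad to hear it! Anything else I can do?"
    return "Okay. Tell me more."

print(respond("I hate this broken thing"))
print(respond("great thanks it works"))
```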
 

khalid

Unelected Mod
14,071
6,775
Well Tuco, a_skeleton_03 did extensive research while reading I, Robot and he knows you are wrong. Checkmate.