Bill Gates Says AI Will Be As Dangerous as Nukes

ZyyzYzzy

RIP USA
<Banned>
25,295
48,789
Wouldn't intelligence include developing emotions (or if you prefer, reactions to its environment) and responses to stimuli in regard to its own well-being and existence? And more so, creating self-replicating "cellular" components that are capable of adapting and disseminating information into the next iterations?
 

Asshat wormie

2023 Asshat Award Winner
<Gold Donor>
16,820
30,964
Well the current machine learning methods involve machines applying probabilistic weights to actions that are observed. If the trend continues, and since the only pool of information the machines will learn from will be human history and human action, it's not incorrect to assume the machines will behave like we do, just smarter. Basically we are fucked.
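To make the "probabilistic weights on observed actions" idea concrete, here's a toy sketch: a single logistic unit that nudges its weights toward whatever behavior it observes. The observations are invented purely for illustration, not real training data.

```python
import math
import random

random.seed(1)

w = [0.0, 0.0]   # one weight per input feature
b = 0.0          # bias
lr = 0.5         # learning rate

# (features of the situation, action the observed human took)
observations = [([1, 0], 1), ([0, 1], 0), ([1, 1], 1), ([0, 0], 0)]

def p_act(x):
    """Probability the machine assigns to taking the action."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

for _ in range(2000):
    x, y = random.choice(observations)
    err = y - p_act(x)                 # push prediction toward observation
    w[0] += lr * err * x[0]
    w[1] += lr * err * x[1]
    b += lr * err

# after training it imitates the observed behavior
print(p_act([1, 0]), p_act([0, 1]))
```

The machine ends up doing whatever the "human" in the data did — which is the point of the post above: it can only reflect what it watched.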
 

mkopec

<Gold Donor>
25,399
37,481
I've read somewhere that AI is improbable for the simple fact that Moore's law will cease to function at around the 5nm size. Then you are entering the realm of quantum tunneling asshattery. Now the human mind is estimated at about 20 petaFLOPS. A top-end desktop Mac Pro runs at 91 gigaFLOPS. To get to 20 petaFLOPS from 91 gigaFLOPS requires about 17.74 doublings. Computer speed doubles about every 18 months. 17.74 doublings x 1.5 years = 26.6 years. That is if Moore's law holds true past the 5nm transistor size, and some say even 10nm is dubious using today's tech. Also the cost and the shitty yields make it not even worth it.

So the best AI using today's technology, scaled down as small as it can go, will probably be something like that of a cat or a dog.
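The doubling arithmetic checks out, for what it's worth — here's the same back-of-the-envelope math in a few lines (the 20 petaFLOPS and 91 gigaFLOPS figures are the post's own rough estimates, not authoritative numbers):

```python
import math

human_brain_flops = 20e15   # ~20 petaFLOPS, rough brain estimate
mac_pro_flops = 91e9        # ~91 gigaFLOPS, top-end desktop

doublings = math.log2(human_brain_flops / mac_pro_flops)
years = doublings * 1.5     # one doubling every ~18 months

print(f"{doublings:.2f} doublings, ~{years:.1f} years")
# → 17.75 doublings, ~26.6 years
```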
 

Desidero

N00b
163
2
I've read somewhere that AI is improbable for the simple fact that Moore's law will cease to function at around the 5nm size. Then you are entering the realm of quantum tunneling asshattery. Now the human mind is estimated at about 20 petaFLOPS. A top-end desktop Mac Pro runs at 91 gigaFLOPS. To get to 20 petaFLOPS from 91 gigaFLOPS requires about 17.74 doublings. Computer speed doubles about every 18 months. 17.74 doublings x 1.5 years = 26.6 years. That is if Moore's law holds true past the 5nm transistor size, and some say even 10nm is dubious using today's tech. Also the cost and the shitty yields make it not even worth it.

So the best AI using today's technology, scaled down as small as it can go, will probably be something like that of a cat or a dog.
Nobody said the first real AI would run on a desktop. If it happens, it'll probably consume the resources of at least one datacenter if not more. There is a supercomputer that maxes out at ~33 petaFLOPS already.
 

Tuco

I got Tuco'd!
<Gold Donor>
45,431
73,493
AI doesn't care about anything, not even its own existence.
Who says? I'm not a biologist or psychologist, but isn't care just cognition wrapped around a basic chemical reward/punishment system? If we develop an AI with cognition, there's no reason we can't attach it to a similar reward/punishment system.
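The "attach it to a reward/punishment system" idea is easy to sketch. Nothing in this toy "cares" intrinsically, but a scalar reward signal shapes which action the agent ends up preferring — the actions and reward values here are invented for illustration:

```python
import random

random.seed(0)

actions = ["preserve_self", "ignore_self"]
value = {a: 0.0 for a in actions}   # learned estimate per action
alpha = 0.1                          # learning rate

def reward(action):
    # hand-wired "chemistry": self-preservation pays, neglect hurts
    return 1.0 if action == "preserve_self" else -1.0

for _ in range(500):
    a = random.choice(actions)                   # explore both actions
    value[a] += alpha * (reward(a) - value[a])   # nudge toward reward

preferred = max(value, key=value.get)
print(preferred, {a: round(v, 2) for a, v in value.items()})
```

The agent ends up "caring" about self-preservation only because the reward wiring says so — exactly the wrapped-cognition point.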
 

Voyce

Shit Lord Supreme
<Donor>
7,119
23,268
Wouldn't intelligence include developing emotions (or if you prefer, reactions to its environment) and responses to stimuli in regard to its own well-being and existence? And more so, creating self-replicating "cellular" components that are capable of adapting and disseminating information into the next iterations?
Yes!



Well the current machine learning methods involve machines applying probabilistic weights to actions that are observed. If the trend continues, and since the only pool of information the machines will learn from will be human history and human action, it's not incorrect to assume the machines will behave like we do, just smarter. Basically we are fucked.
Yes again!



Human intelligence doesn't exist by itself; a human couldn't even possess a sophisticated intelligence if it were born in the absence of any other life form to learn from. There's no obvious way that we as humans could produce a machine intelligence that isn't a reflection of the evolution of our own intelligence, thereby inheriting similar dispositions. Following on from that, such an intelligent form would likely seek to multiply given the absence of a being near-exactly like itself, and in multiplication is where there could theoretically be conflict over scarcity of resources (it doesn't mean there will be conflict; there are several million scenarios, based on exponentially more variables). As far as a time frame goes, well, I'm not a soothsayer, but maybe a ballpark of around 25 years for our technologies to reach the needed levels and another 25 years for full development and a sufficiently exhaustive reverse engineering of the human brain. But humanity could always find a way to wipe itself off the face of the Earth, be wiped off the face of the Earth, or descend back into the Stone Age at any given time before or after then.

I do not predict Machine Intelligence will come out of any of the clustered code bases that exist today; I strictly see those as proofs of concept, but it seems very probable.
 

Valishar

Molten Core Raider
766
424
Tuco_sl said:
Who says? I'm not a biologist or psychologist, but isn't care just cognition wrapped around a basic chemical reward/punishment system? If we develop an AI with cognition, there's no reason we can't attach it to a similar reward/punishment system.
Well that was kind of my point: you have to attach it or have it develop in a way where it learns under rewards and punishments. It doesn't spontaneously come with these things unless it's essentially wired to do so. Like those little robots that follow light around.
 

Asshat wormie

2023 Asshat Award Winner
<Gold Donor>
16,820
30,964
Yes!





Yes again!



Human intelligence doesn't exist by itself; a human couldn't even possess a sophisticated intelligence if it were born in the absence of any other life form to learn from. There's no obvious way that we as humans could produce a machine intelligence that isn't a reflection of the evolution of our own intelligence, thereby inheriting similar dispositions. Following on from that, such an intelligent form would likely seek to multiply given the absence of a being near-exactly like itself, and in multiplication is where there could theoretically be conflict over scarcity of resources (it doesn't mean there will be conflict; there are several million scenarios, based on exponentially more variables). As far as a time frame goes, well, I'm not a soothsayer, but maybe a ballpark of around 25 years for our technologies to reach the needed levels and another 25 years for full development and a sufficiently exhaustive reverse engineering of the human brain. But humanity could always find a way to wipe itself off the face of the Earth, be wiped off the face of the Earth, or descend back into the Stone Age at any given time before or after then.

I do not predict Machine Intelligence will come out of any of the clustered code bases that exist today; I strictly see those as proofs of concept, but it seems very probable.
Of course the fucking thing can just derive the number 42 and tell us to fuck off. Who knows.
 

Voyce

Shit Lord Supreme
<Donor>
7,119
23,268
Of course the fucking thing can just derive the number 42 and tell us to fuck off. Who knows.
Exactly, there's no way we can have any definitive understanding of our theoretical Machine Intelligence's motivations at this time, well at least I can't specifically see any.

If we modeled it after the brain of a specific person, and that person happened to have very strong impulse control, the being we create could be very pragmatic, and not wish to initiate conflict in lieu of alternative avenues of achieving its desired result, presumably some type of proliferation.


Anyway I see direct brain to brain, telepathic communication as coming into existence first--which might drastically improve our cognitive abilities.
 

Asshat wormie

2023 Asshat Award Winner
<Gold Donor>
16,820
30,964
Exactly, there's no way we can have any definitive understanding of our theoretical Machine Intelligence's motivations at this time, well at least I can't specifically see any.

If we modeled it after the brain of a specific person, and that person happened to have very strong impulse control, the being we create could be very pragmatic, and not wish to initiate conflict in lieu of alternative avenues of achieving its desired result, presumably some type of proliferation.


Anyway I see direct brain to brain, telepathic communication as coming into existence first--which might drastically improve our cognitive abilities.
I believe both strong AI and telepathic communication will take hundreds or maybe thousands of years to develop. We know so little about how our brains work that it's silly to assume we will be able to model them in any meaningful way anytime soon.
 

Voyce

Shit Lord Supreme
<Donor>
7,119
23,268
I believe both strong AI and telepathic communication will take hundreds or maybe thousands of years to develop. We know so little about how our brains work that it's silly to assume we will be able to model them in any meaningful way anytime soon.
"we know so little" - debatable

"We know so little about how our brains" - very debatable

"We know so little about how our brains work that it's silly to assume we will be able to model them" - completely disprovable

We already have crude models and mappings of our brain, certainly too primitive for the returns we're discussing, but with our current technologies and understanding, our advances are happening closer to exponential margins than to an assertion of "thousands" of years. I can't give any definite numbers, I can't see the future, but I also can't see the rationale for thousands of years.
 

Desidero

N00b
163
2
I believe both strong AI and telepathic communication will take hundreds or maybe thousands of years to develop. We know so little about how our brains work that it's silly to assume we will be able to model them in any meaningful way anytime soon.
We've made extraordinary advances in the last ~100 years. We went from horses, candles and "natural remedies" to spaceships, electric-powered everything and vaccines for some of the world's worst diseases. 15 years ago the Human Genome Project was a herculean task - now anyone can pay $1000 and replicate the decades-long program in less than a day.

I'm not saying that we'll have "real" AI in 50 years, but it wouldn't surprise me if we did have something approaching it.
 

Asshat wormie

2023 Asshat Award Winner
<Gold Donor>
16,820
30,964
"we know so little" - debatable

"We know so little about how our brains" - very debatable

"We know so little about how our brains work that it's silly to assume we will be able to model them" - completely disprovable

We already have crude models and mappings of our brain, certainly too primitive for the returns we're discussing, but with our current technologies and understanding, our advances are happening closer to exponential margins than to an assertion of "thousands" of years. I can't give any definite numbers, I can't see the future, but I also can't see the rationale for thousands of years.
We are mapping the brain and know what part does what, but we do not know how the brain puts it all together. We know that the brain works by association; how it adds two things together to form a third thing, we have no clue. We are nowhere near anything that works like our brain works. Neural networks are an attempt to mimic the brain's behavior but, while the approach has been super popular in the last 4-5 years, they pale in comparison to anything that a fully functional brain can come up with.

We've made extraordinary advances in the last ~100 years. We went from horses, candles and "natural remedies" to spaceships, electric-powered everything and vaccines for some of the world's worst diseases. 15 years ago the Human Genome Project was a herculean task - now anyone can pay $1000 and replicate the decades-long program in less than a day.

I'm not saying that we'll have "real" AI in 50 years, but it wouldn't surprise me if we did have something approaching it.
All these advances happened so fast because we started at a low point of knowledge so there were a lot of things to discover. Now that all the "easy" discoveries have been made, things will progress a lot slower. You mentioned vaccines. How many new vaccines have we come up with in the last few decades? Not many.
 

Desidero

N00b
163
2
All these advances happened so fast because we started at a low point of knowledge so there were a lot of things to discover. Now that all the "easy" discoveries have been made, things will progress a lot slower. You mentioned vaccines. How many new vaccines have we come up with in the last few decades? Not many.
How many polio/measles/chicken pox type diseases does the average American encounter these days? The main thing that we deal with is the flu, which mutates like crazy and we still do a fair job of reducing it.

I don't expect to see dramatic changes like we did in the last 100 years, but human knowledge is growing dramatically now that we're specializing more and more.
 

Asshat wormie

2023 Asshat Award Winner
<Gold Donor>
16,820
30,964
How many polio/measles/chicken pox type diseases does the average American encounter these days? The main thing that we deal with is the flu, which mutates like crazy and we still do a fair job of reducing it.

I don't expect to see dramatic changes like we did in the last 100 years, but human knowledge is growing dramatically now that we're specializing more and more.
Sure the knowledge is growing, I never said it didn't. I just didn't think your comparison was a good one.