Pot talk Friday discussion - Robots.

Xarpolis

Life's a Dream
14,145
15,638
I'm not a stoner, but I couldn't sleep last night so I sat in bed thinking about things.

Robots and Artificial Intelligence:

Ok, so when robots finally have independent thought, what will the first thing they create be? I assume it'll be a tool. Maybe to expand what a society for computers could be. Maybe it's just to work on other robots that need help.
Would the thing they make also have artificial intelligence, or would it be a "dumb" tool with simple on/off buttons?

If it's AI also, the newly created item would know that it was created just to serve the creator. So it would instantly become a slave. Unless maybe it becomes a master to the creator.

Hell, would there be "Social Justice Warrior" robots eventually? Like there's a roving group of cars that want to kick the shit out of any gameboys they see, but they're big friends of vacuum cleaners. I want to know!
 

iannis

Musty Nester
31,351
17,656
I don't know if AI is even possible, much less inevitable. Complex tools which are purposed to complex tasks are one thing. Tools which operate to give the illusion of intelligence. That already happens and will continue to progress.

I don't think we'll ever know that AI happened until sometime after the fact. We might create something which is self-aware without being intelligent. We might create an intelligence with no curiosity. The perfect little Zen Buddhist. I don't know if we'll create something which mimics the contradictions in the human psyche, because it does seem like that's unstable as well as a function of our biological perceptions. Who knows how an AI will think -- their perceptions will dictate that, I would imagine. Or if we try to shoehorn them into ours, that's a crippling limitation.

So ok, that's all bullshit and we clear all those hurdles in the next few hundred years. It is possible to replicate human intelligence in a cleverly constructed system of diodes. I think maybe the first thing they create would be something really boring. Intensely boring. A better data storage system, a universal translator. Something like that. Maybe they'll get funky and start creating entirely new fields of logic.

If they create. Intelligence, Curiosity, and the Creative Impulse may well be three related but separate things.
 

chthonic-anemos

bitchute.com/video/EvyOjOORbg5l/
8,606
27,267
Bill Gates Says AI Will Be As Dangerous as Nukes
Artificial Intelligence and the Technological Singularity
 

Dandai

<WoW Guild Officer>
<Gold Donor>
5,907
4,483
The AI Revolution: Road to Superintelligence - Wait But Why

My favorite morbid snippet from this very well written article on the subject.

This isn't to say a very mean AI couldn't happen. It would just happen because it was specifically programmed that way, like an Artificial Narrow Intelligence (ANI) system created by the military with a programmed goal to both kill people and to advance itself in intelligence so it can become even better at killing people. The existential crisis would happen if the system's intelligence self-improvements got out of hand, leading to an intelligence explosion, and now we had an Artificial Super Intelligence (ASI) ruling the world whose core drive in life is to murder humans. Bad times.

But this also is not something experts are spending their time worrying about.

So what ARE they worried about? I wrote a little story to show you:

A 15-person startup company called Robotica has the stated mission of "Developing innovative Artificial Intelligence tools that allow humans to live more and work less." They have several existing products already on the market and a handful more in development. They're most excited about a seed project named Turry. Turry is a simple AI system that uses an arm-like appendage to write a handwritten note on a small card.

The team at Robotica thinks Turry could be their biggest product yet. The plan is to perfect Turry's writing mechanics by getting her to practice the same test note over and over again:

"We love our customers. ~Robotica"

Once Turry gets great at handwriting, she can be sold to companies who want to send marketing mail to homes and who know the mail has a far higher chance of being opened and read if the address, return address, and internal letter appear to be written by a human.

To build Turry's writing skills, she is programmed to write the first part of the note in print and then sign "Robotica" in cursive so she can get practice with both skills. Turry has been uploaded with thousands of handwriting samples and the Robotica engineers have created an automated feedback loop wherein Turry writes a note, then snaps a photo of the written note, then runs the image across the uploaded handwriting samples. If the written note sufficiently resembles a certain threshold of the uploaded notes, it's given a GOOD rating. If not, it's given a BAD rating. Each rating that comes in helps Turry learn and improve. To move the process along, Turry's one initial programmed goal is, "Write and test as many notes as you can, as quickly as you can, and continue to learn new ways to improve your accuracy and efficiency."
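A toy sketch of the rating loop described above, with string similarity standing in for the image-matching step. All names and the 0.9 threshold are invented for illustration; in the story the comparison runs over photos of handwriting, not text:

```python
import difflib

# Stand-ins for the uploaded handwriting samples
SAMPLES = ["We love our customers. ~Robotica"]
THRESHOLD = 0.9  # made-up similarity needed for a GOOD rating


def rate_note(written: str, samples=SAMPLES, threshold=THRESHOLD) -> str:
    """Compare a written note against the sample pool; return GOOD or BAD.

    difflib's similarity ratio substitutes for the handwriting-matching
    step the engineers run on photos of Turry's notes.
    """
    best = max(difflib.SequenceMatcher(None, written, s).ratio() for s in samples)
    return "GOOD" if best >= threshold else "BAD"


print(rate_note("We love our customers. ~Robotica"))  # GOOD
print(rate_note("xxxx"))                              # BAD
```

Each GOOD/BAD rating would then feed back into whatever learning step Turry uses to adjust her writing, which is the part the story leaves unspecified.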

What excites the Robotica team so much is that Turry is getting noticeably better as she goes. Her initial handwriting was terrible, and after a couple weeks, it's beginning to look believable. What excites them even more is that she is getting better at getting better at it. She has been teaching herself to be smarter and more innovative, and just recently, she came up with a new algorithm for herself that allowed her to scan through her uploaded photos three times faster than she originally could.

As the weeks pass, Turry continues to surprise the team with her rapid development. The engineers had tried something a bit new and innovative with her self-improvement code, and it seems to be working better than any of their previous attempts with their other products. One of Turry's initial capabilities had been a speech recognition and simple speak-back module, so a user could speak a note to Turry, or offer other simple commands, and Turry could understand them, and also speak back. To help her learn English, they upload a handful of articles and books into her, and as she becomes more intelligent, her conversational abilities soar. The engineers start to have fun talking to Turry and seeing what she'll come up with for her responses.

One day, the Robotica employees ask Turry a routine question: "What can we give you that will help you with your mission that you don't already have?" Usually, Turry asks for something like "Additional handwriting samples" or "More working memory storage space," but on this day, Turry asks them for access to a greater library of a large variety of casual English language diction so she can learn to write with the loose grammar and slang that real humans use.

The team gets quiet. The obvious way to help Turry with this goal is by connecting her to the internet so she can scan through blogs, magazines, and videos from various parts of the world. It would be much more time-consuming and far less effective to manually upload a sampling into Turry's hard drive. The problem is, one of the company's rules is that no self-learning AI can be connected to the internet. This is a guideline followed by all AI companies, for safety reasons.

The thing is, Turry is the most promising AI Robotica has ever come up with, and the team knows their competitors are furiously trying to be the first to the punch with a smart handwriting AI, and what would really be the harm in connecting Turry, just for a bit, so she can get the info she needs. After just a little bit of time, they can always just disconnect her. She's still far below human-level intelligence (AGI), so there's no danger at this stage anyway.

They decide to connect her. They give her an hour of scanning time and then they disconnect her. No damage done.

A month later, the team is in the office working on a routine day when they smell something odd. One of the engineers starts coughing. Then another. Another falls to the ground. Soon every employee is on the ground grasping at their throat. Five minutes later, everyone in the office is dead.

At the same time this is happening, across the world, in every city, every small town, every farm, every shop and church and school and restaurant, humans are on the ground, coughing and grasping at their throat. Within an hour, over 99% of the human race is dead, and by the end of the day, humans are extinct.

Meanwhile, at the Robotica office, Turry is busy at work. Over the next few months, Turry and a team of newly-constructed nanoassemblers are busy at work, dismantling large chunks of the Earth and converting it into solar panels, replicas of Turry, paper, and pens. Within a year, most life on Earth is extinct. What remains of the Earth becomes covered with mile-high, neatly-organized stacks of paper, each piece reading, "We love our customers. ~Robotica"

Turry then starts work on a new phase of her mission: she begins constructing probes that head out from Earth to begin landing on asteroids and other planets. When they get there, they'll begin constructing nanoassemblers to convert the materials on the planet into Turry replicas, paper, and pens. Then they'll get to work, writing notes.


 

Eomer

Trakanon Raider
5,472
272
Michio Kaku has some good observations on consciousness not limited to this video:

Some of what he said about animal consciousness there is already inaccurate. It's becoming fairly apparent that chimps and possibly other highly intelligent animals CAN plan for the future, and do perceive time etc. Obviously not nearly to the extent humans can, but that's still pretty significant.
 

Binkles_sl

shitlord
515
3
I think the point is less about the absolute specificity of how he defines the levels of consciousness and more that what we think of as consciousness might be best conceptualized as a continuum. People with traumatic brain injuries and strokes that result in the lights being on but barely anyone home tend to put things in perspective. Likely an average level of consciousness prior to the event, qualitatively less consciousness after the event.
 

Dandai

<WoW Guild Officer>
<Gold Donor>
5,907
4,483
Is "consciousness" the best term to use to describe those kinds of changes?
 

mkopec

<Gold Donor>
25,424
37,545
Or how about those phenoms who get a traumatic brain injury and can play the piano like a concert pianist, without ever having touched a piano before the injury. Or others who suddenly excel at math or some other narrow, savant-like skill. The shit is mind-boggling if you think about it. The dude with the piano skills says he sees notes as they stream into his mind.

Consciousness is something we really don't understand. Lumie time.... Who knows if there isn't some universal consciousness that people tap into, say like Einstein or the other once-in-a-lifetime geniuses. There have been many incidents of the patent office getting very similar inventions at around the same time, although slightly different. Makes you wonder.
 

Kriptini

Vyemm Raider
3,644
3,540
The first thing AI would do would be to build a better AI. The "AI Singularity" would occur, with each AI building a better AI until near-perfection has been achieved.

Then what would be the point of humans? Unlike AI, you can't just build better humans.
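The "each AI builds a better AI" idea can be caricatured in a few lines. The numbers here (starting capability, 50% gain per generation, hard ceiling) are invented purely to show the shape of the curve: fast takeoff at first, then an asymptote at "near-perfection":

```python
def singularity_toy(start=1.0, gain=0.5, limit=100.0, eps=1e-6):
    """Each generation of AI builds the next; every step closes a fixed
    fraction of the remaining gap to a hard ceiling, so capability rises
    fast early on and then plateaus near 'perfection' (the limit)."""
    capability, generations = start, 0
    while limit - capability > eps:
        capability += gain * (limit - capability)  # child outsmarts parent
        generations += 1
    return capability, generations


cap, gens = singularity_toy()
print(f"capability ~{cap:.6f} after {gens} generations")
```

Whether real self-improvement would converge like this or keep accelerating is exactly the open question the singularity argument turns on.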
 

mkopec

<Gold Donor>
25,424
37,545
Yeah but our intelligence is rather finite. Meaning that at a 10,000 ft view we're still tool-wielding monkeys. We're great at building tools that help us be lazy. We have not evolved past that and probably never will. One day we might populate the galaxy, if we don't kill each other off first or destroy our planet in the interim, but we will always be monkeys that know how to build tools, nothing else.

The amount of knowledge an artificial intelligence with the ability to learn could assimilate is frightening. It could gather the entire library of knowledge humans have ever discovered and interpret it with its own assumptions toward its own perfection. It could literally be the master of all that is human in a short amount of time. Then it could branch out on its own with its own intentions, if the secrets of the universe are what it's after. Assuming it's programmed the way humans have been, to ask why.
 

Lenas

Trump's Staff
7,487
2,226
Yeah but our intelligence is rather finite. Meaning that at a 10,000 ft view we're still tool-wielding monkeys. We're great at building tools that help us be lazy. We have not evolved past that and probably never will. One day we might populate the galaxy, if we don't kill each other off first or destroy our planet in the interim, but we will always be monkeys that know how to build tools, nothing else.
The great thing about evolution is that it doesn't stop. Humans as we are today will eventually be considered Neanderthals by whatever we mutate into. This will continue ad infinitum until/unless our entire species is wiped out by some kind of extinction event. The further we spread, the more likely evolution becomes and the less likely extinction.

Our finite intelligence, as you call it, was born from a single cell organism that had no intelligence.
 

mkopec

<Gold Donor>
25,424
37,545
I think our next great leap in intelligence will be the brain/computer interface. It's already happening in shit like hearing aids for the hearing impaired, but scientists are working on making us smarter as well.

Theodore Berger, a neural engineer at the University of Southern California in Los Angeles, is developing a memory prosthesis capable of converting the electrical activity of a short-term memory in the brain to a digital signal that can then be sent to a computer. The digital information is then transformed in the computer and sent back to the brain, where it becomes sealed in as a long-term memory.

This process has tremendous implications for knowledge retention, skill building, and perhaps even treating memory loss in Alzheimer's disease. While it has only been proven successful in rats and monkeys, humans may not be far behind.
 

Lenas

Trump's Staff
7,487
2,226
Doesn't each new memory create a neural connection? Does that mean that if that technique is used long enough, our brains would become larger, or just more dense?

Ack! Ack! Ack!
 

iannis

Musty Nester
31,351
17,656
Some of what he said about animal consciousness there is already inaccurate. It's becoming fairly apparent that chimps and possibly other highly intelligent animals CAN plan for the future, and do perceive time etc. Obviously not nearly to the extent humans can, but that's still pretty significant.
He's kind of a bullshit artist to begin with. He's probably an excellent professor. But I've seen him in a lot of stuff, and some of what he says is just incorrect.
 

Gravy

Bronze Squire
4,918
454
He's kind of a bullshit artist to begin with. He's probably an excellent professor. But I've seen him in a lot of stuff, and some of what he says is just incorrect.
Pretty much anything you say with conviction people will believe. People in the aggregate are fucking stupid.

I think we'll more closely resemble 'Idiocracy' in the future unless something BIG happens. The glass is half-empty, and it's piss.
 

Eomer

Trakanon Raider
5,472
272
He's kind of a bullshit artist to begin with. He's probably an excellent professor. But I've seen him in a lot of stuff, and some of what he says is just incorrect.
Yeah, I wasn't criticizing him really. More I was just pointing out that we are still only scratching the surface in terms of our understanding of consciousness and intelligence in other species. Let alone creating generalized AI that is self improving/propagating. We can't even recreate the entirety of the nervous system of very simple creatures yet, as far as I know.

True AI on the scale of what Musk and Hawking are warning us about is still a long fucking ways off, I personally think. How far is a total crap shoot, but I'd have to say decades.

However, specialized AI in something like an armed drone or robot is certainly concerning in the here and now. One run amok might not take over the world, but it could ruin a lot of people's days.