
View Full Version : Artificial Super Intelligence - are we ready for it?



Odelay
03-24-2015, 04:52 AM
Many say that we won't experience the worst of the effects associated with global warming for another 50 or even 100 years.

But what if we face a crisis that very few people seem to know about today, one that will manifest itself almost overnight, in a matter of only 20-30 years?

Welcome to the world of true artificial intelligence.

The two parts of this linked piece are really, really long, but in my opinion well worth it. The author puts all of this in layman's terms, so despite the length it's a relatively easy read.

http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

I'm also throwing in a long read on Fermi's paradox, which relates to the question of why we haven't heard from any extra-terrestrial species. Fermi's paradox also involves AI because even an ET species that isn't interested in galactic travel could easily send AI voyagers to spy on or greet others.

http://waitbutwhy.com/2014/05/fermi-paradox.html


My opinion? This stuff concerns me. Humankind seems best at reacting to and countering crises that we can see coming. Things like past ice ages, imperialism and industrialization we could see coming gradually, and we adjusted our lives and behavioral patterns. That's why, although I'm concerned with global warming, I do believe that we'll have some interesting answers to it, including big geo-engineering responses. We'll adapt. Also, it might not be all bad. Just as the last Ice Age must have been devastating, humankind also seemed to turn it to advantage by crossing the land bridge it exposed across the Bering Strait. I'm guessing there will be some unforeseen benefits to global warming.

Crises that come upon us all at once, however, I believe are a much bigger issue for humankind. So far, we have dodged a huge bullet with the invention of atomic weaponry. But I think there was a lot of luck at play.

Will we be so lucky when Artificial Super Intelligence arrives? As the author of the linked pieces asserts, I'm not too sure.

Stavros
03-24-2015, 04:10 PM
Many say that we won't experience the worst of the effects associated with global warming for another 50 or even 100 years.

But what if we face a crisis that very few people seem to know about today, one that will manifest itself almost overnight, in a matter of only 20-30 years?

Welcome to the world of true artificial intelligence.
My opinion? This stuff concerns me. Humankind seems best at reacting to and countering crises that we can see coming. Things like past ice ages, imperialism and industrialization we could see coming gradually, and we adjusted our lives and behavioral patterns. That's why, although I'm concerned with global warming, I do believe that we'll have some interesting answers to it, including big geo-engineering responses. We'll adapt. Also, it might not be all bad. Just as the last Ice Age must have been devastating, humankind also seemed to turn it to advantage by crossing the land bridge it exposed across the Bering Strait. I'm guessing there will be some unforeseen benefits to global warming.

Crises that come upon us all at once, however, I believe are a much bigger issue for humankind. So far, we have dodged a huge bullet with the invention of atomic weaponry. But I think there was a lot of luck at play.

Will we be so lucky when Artificial Super Intelligence arrives? As the author of the linked pieces asserts, I'm not too sure.

Odelay, I was going to thank you for these links, but when I began to read the first one, I became irritated at simple errors in a rather freewheeling discussion. For example, at the very start he writes:
Imagine taking a time machine back to 1750—a time when the world was in a permanent power outage, long-distance communication meant either yelling loudly or firing a cannon in the air, and all transportation ran on hay.
I can't take this seriously, not even as a light-hearted comment, given the huge importance of the sea in transportation long before 1750, including internal waterways, seas and oceans- even if in the case of internal waterways canal boats were at one time drawn by horses. A rather more eloquent and dare I say more informed historical framework takes the world in 1400 and is in Eric Wolf's Europe and the People Without History (1982) and the first volume of Quentin Skinner's The Foundations of Modern Political Thought (1978), which takes its cue from the emerging city states of Italy in the 12th century as a point of departure for a discussion of modern ideas in politics.

May I refer you to the Centre for the Study of Existential Risk in Cambridge (UK) where you will find a good set of discussions on AI and associated issues, as well as links to the Future of Humanity Institute in Oxford and some US institutions.
http://cser.org/about/our-mission/

This is a vast topic, whether it is about the impact algorithms have had on our daily life, or on the broader issue of humans being replaced by robots with feelings...

trish
03-24-2015, 07:27 PM
Thanks, Odelay, for the articles. I perused the first one over my morning cappuccino at the coffee shop. It was a long one with many points. And thank you Stavros for the link.

I think it may be possible to make a distinction between intelligence, knowledge and processing power (speed and memory), although the three bleed together and support one another. We have been quite successful in building machines with stupendous speed and considerable memory, but not (I think) so successful at building intelligent machines or knowledgeable ones (if knowledge is more than just data storage).

Take chess playing programs for example. At first AI researchers attempted to reduce the heuristics, strategies and insights that chess masters have gained through study and experience into computer code. But this sort of attack never led to much success. Today, though, there are algorithms that consistently outperform masters of the game. What changed? The speed and memory of machines changed. AI researchers realized they could win chess games by brute force rather than cunning and intelligence. For the most part modern chess playing algorithms simply run through the tree of all possible moves of the game, far more moves ahead than any human could manage.
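To see how little chess “insight” that kind of search needs, here is a minimal sketch of the idea, using a toy game of Nim (take 1-3 stones, last stone wins) instead of chess, since the whole game tree fits in a few lines; a real engine adds refinements like alpha-beta pruning and a hand-tuned evaluation function, none of which appear here:

```python
# Brute-force game-tree search (negamax) on a toy game: players alternately take
# 1, 2 or 3 stones; whoever takes the last stone wins. No strategy is coded in -
# the program just enumerates every possible continuation.

def negamax(stones):
    """Score for the player about to move: +1 means a forced win, -1 a forced loss."""
    if stones == 0:
        return -1                              # the opponent just took the last stone
    return max(-negamax(stones - take)         # try every legal move and assume the
               for take in (1, 2, 3)           # opponent then also plays perfectly
               if take <= stones)

def best_move(stones):
    """Pick the move with the best brute-force score - no heuristics, just search."""
    return max((t for t in (1, 2, 3) if t <= stones),
               key=lambda t: -negamax(stones - t))

if __name__ == "__main__":
    for n in range(1, 10):
        print(f"{n} stones: take {best_move(n)} (score {negamax(n)})")
```

Chess works the same way in principle; the tree is just astronomically bigger, which is exactly why speed and memory, not cunning, made the difference.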

An analogy: Suppose one would like to solve the cubic equation 2 x^3 - 55 x^2 + 13 x + 378 = 0. Two methods come to mind. The first: use a change of variables to transform it into a more manageable form, factor, and backtrack to the roots. If you can get it to work this will provide you with a general procedure for solving all cubic equations. This is what Niccolo Tartaglia did back in the sixteenth century (there’s an interesting history here). The second method is to just start picking numbers 0, 1, -1, 2, -2, 3, -3 etc. and plugging them into the formula to see if they work. A computer can do this super-fast and will come up with the answer 27 in the time it takes you to lift your eyes from the enter button to the computer screen. The advantage of Tartaglia’s approach is that now he has a general expression for the roots of a cubic which will find you those roots even when they’re irrational. The second approach will only find integer roots and it gives you no way of expressing them generally. The first approach is clever; the second is brute force. Of course the programmers could just code Tartaglia’s formula into an algorithm and then the computer will beat Tartaglia every time. But the point here is: there is no general formula (yet) for playing a good game of chess. Humans stand a better chance using the heuristic strategies they’ve gained through experience. Machines stand a better chance using brute force searches.
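A sketch of that second, brute-force method (the polynomial is the one above; the search limit is arbitrary):

```python
# Blindly plug small integers into 2x^3 - 55x^2 + 13x + 378 until one is a root.

def p(x):
    return 2 * x**3 - 55 * x**2 + 13 * x + 378

def integer_root_search(limit=1000):
    """Try 0, 1, -1, 2, -2, ... out to +/-limit; return the first integer root found."""
    for n in range(limit + 1):
        for candidate in (n, -n):
            if p(candidate) == 0:
                return candidate
    return None   # no integer root in range; brute force has nothing more to say

print(integer_root_search())   # prints 27
```

It finds 27 and nothing else: the remaining two roots of this particular cubic are irrational, so the search sails right past them, exactly as described above.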

Another example is language translation. Several decades ago it was predicted that computers would by now be translating languages with nuance and ease. This isn’t so. Again, the best algorithms simply utilize brute force. Google Translate searches a huge database of previously translated text for words and phrases that appear in the text to be translated and simply replaces those phrases with the found counterparts in the target language. No grammar. No computerized Chomsky language modules. Just brute force searches. It translates languages better than a chimp can, but it’s not as intelligent as a chimp.
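To make “no grammar, just lookup” concrete, here is a toy caricature of phrase-table substitution; the tiny English-to-French table is invented purely for illustration and is not meant to describe how Google Translate is actually engineered:

```python
# Toy "translation" by longest-phrase lookup: no grammar, just substitution of
# phrases that have been seen (and translated) before. The phrase table is made up.

PHRASE_TABLE = {
    "good morning": "bonjour",
    "thank you very much": "merci beaucoup",
    "the cat": "le chat",
    "is sleeping": "dort",
}

def translate(sentence):
    words = sentence.lower().split()
    out, i = [], 0
    while i < len(words):
        for j in range(len(words), i, -1):          # greedily try the longest phrase
            phrase = " ".join(words[i:j])
            if phrase in PHRASE_TABLE:
                out.append(PHRASE_TABLE[phrase])
                i = j
                break
        else:
            out.append(words[i])                    # unknown word: pass it through
            i += 1
    return " ".join(out)

print(translate("Good morning the cat is sleeping"))   # -> "bonjour le chat dort"
```

Scale the table up to billions of previously translated phrases and you have, roughly, the brute-force picture sketched above.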

It’s not that I don’t believe machines can think. We’re machines, and we think. But I don’t think (I’m no expert mind you) there are any artificial general intelligence algorithms in existence that come close to what a chimp does.

Nor am I oblivious to the threat of artificial intelligence. In the wrong hands immense populations can be tracked, surveilled and controlled.

The social and economic repercussions of sophisticated software are and will continue to be enormous. Because brute force calculation can simulate intelligent behavior, there is a temptation to replace intelligent agents with algorithms. They replace workers on the factory floor. They sell airline tickets and even fly the airliners. They diagnose disease. They tell us if the DNA of the defendant matches the blood and semen found at the crime scene. They break these very words into bits and route them in different directions around the globe and reconstruct them on your screen. Power doesn’t have to be intelligent to be dangerous.

The article speculates that the machines of the future will be super-intelligent. I question the concept:

I don’t know, but I think it’s possible that intelligence is not hierarchical. Like a traffic light. It can turn green and it’ll be legal to proceed, but it can’t turn more green and provide more assurance of your right to proceed.
Another analogy can be found in the theory of computation. There are a lot of different kinds of automata, but there are none more “powerful” than a Universal Turing Machine (UTM). There is no computational task, in principle, that cannot be done with a UTM. The hardware in which it is realized might be slow or fast. It might be electronic or organic. But computationally speaking it is the top of the mountain. There is nothing higher.
Intelligence may be like that. There may be intelligent agents who think faster than us and have more memory than us. There may be intelligent agents (at least in principle) who think on geologic time scales. But perhaps there’s no intelligent way to make sense of the question, “Which of us is more intelligent?”

Suppose, however, super-intelligence IS a possibility and in a few short decades super-intelligent AIs will be inhabiting the cloud. I wonder if we’ll recognize them. What sort of culture would they have? How sophisticated would their languages be? Would we recognize their discourse as discourse, or would it just seem like a scatter of random bits? What sort of arts would engage them? Does a chimp see the image of a woman when it examines the Mona Lisa? Should we gaze at a swirl of changing characters on our screens, would we recognize it as a super-intelligent agent’s expression of impermanent beauty and balance? Would we even get that super-intelligent art is art? Or is this just a reductio ad absurdum leading us back to the conclusion that we are ultimately the same?

Sorry for the long rambling response.

Stavros
03-25-2015, 12:26 PM
The social and economic repercussions of sophisticated software are and will continue to be enormous. Because brute force calculation can simulate intelligent behavior, there is a temptation to replace intelligent agents with algorithms. They replace workers on the factory floor. They sell airline tickets and even fly the airliners.


By coincidence the BBC a few nights ago showed a documentary on the 2009 crash of Air France 447, which left Rio de Janeiro and crashed into the Atlantic about 3 hours after take-off. It was an extraordinary example of the confusion that took place in the cockpit when the computers shut down because they could not make sense of the pilot error which resulted in the plane going nose-up into a storm and stalling. But the co-pilot had manual control of the plane because the computers shut down. It raised the question: would this plane have crashed if it had been left solely to the computer? The initial problem was that the storm outside froze up the pitot tubes which give the speed readings, which could not then be read or understood, but a drop in altitude would have melted the ice and restored that information, and at 38,000 feet there is enough room to re-order the plane's trajectory; but the co-pilot kept his hand on the manual control that was sending the plane ever upward until it just could go no further and dropped like a stone in less than four minutes.

By contrast, I have more than once used an automated supermarket checkout and without doing anything wrong have been unable to proceed because of an unexpected item in bagging area...something tells me we have a long way to go before reconciling the wonders of technology with the not always wonderful people who design it.

martin48
03-25-2015, 05:58 PM
I'll look at just one of your ramblings - the Universal Turing Machine. A UTM can compute anything that is computable! And no more. Turing himself showed that some seemingly simple tasks, such as predicting whether a process will ever end - the halting problem - cannot be solved. This may appear rather abstract but actually it is rather important to us as humans to have an understanding of when something may stop or change.

AI works in two ways - brute force (OK for playing chess but not for getting through life in general) or by optimising some function (we call it a cost function). There are two ways of optimising - go downhill (reduce some error between what you observe and what you desire) or take random jumps in the dark (evolutionary computing). A fully exhaustive search could need infinite resources and infinite time - that is really the idea behind NP-complete problems (like the Travelling Salesman). So brute force and brute "logic" don't always work - so you have to accept answers that are "good enough". There is the rub.
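As a rough sketch of what settling for "good enough" looks like, here is a tiny local-search heuristic for the Travelling Salesman mentioned above - propose a small change, keep it if it shortens the tour, and stop with whatever you have. The city coordinates and step count are arbitrary; this is an illustration of the idea, not a serious solver:

```python
# "Good enough" optimisation for a small random Travelling Salesman instance:
# greedy downhill moves (swap two cities, keep the swap only if the tour shrinks)
# starting from a random tour. Almost certainly not optimal - merely acceptable.
import math
import random

def tour_length(tour, cities):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def hill_climb(cities, steps=20000):
    tour = list(range(len(cities)))
    random.shuffle(tour)                         # a random jump in the dark to start
    best = tour_length(tour, cities)
    for _ in range(steps):
        i, j = random.sample(range(len(tour)), 2)
        tour[i], tour[j] = tour[j], tour[i]      # propose a small local change
        length = tour_length(tour, cities)
        if length < best:
            best = length                        # downhill: keep it
        else:
            tour[i], tour[j] = tour[j], tour[i]  # uphill: undo it
    return best, tour

cities = [(random.random(), random.random()) for _ in range(30)]
best, tour = hill_climb(cities)
print(f"good-enough tour length: {best:.3f}")
```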

On a more practical side - we may be able to sequence DNA at speeds that seem incredible or believe that we can recognise faces better than humans can, but we don't have an AI computer that can identify a chair!

We still have little idea how our brains convert signals into symbols.


Another analogy can be found in the theory of computation. There are a lot of different kinds of automata, but there are none more “powerful” than a Universal Turing Machine (UTM). There is no computational task, in principle, that cannot be done with a UTM. The hardware in which it is realized might be slow or fast. It might be electronic or organic. But computationally speaking it is the top of the mountain. There is nothing higher.

trish
03-25-2015, 07:38 PM
I'll look at just one of your ramblings - the Universal Turing Machine. A UTM can compute anything that is computable! And no more.
I didn’t mean to imply anything different. What one automaton can compute, so can a UTM. A mathematician often speaks in terms of partial functions (from finite ordinals to finite ordinals). A computing machine is said to compute a partial function f if you can program it to output f(n) whenever it is given an input n from the domain of f (and will not output anything if n isn’t in the domain of f). Before the discovery of the UTM one might have thought that given any computer, there would always be another one that computes more functions. A computer designer might have aspired to build machines which could compute more and more functions. But the UTM is the best one can do in that regard. Once you build a machine equivalent to a UTM you’ve crossed the threshold where this specific aspiration comes to an end. No other machine can be “smarter” than the one you just built. Now it’s just a matter of giving it knowledge (programming it), giving it more readily available memory and making it faster. My speculation is that “intelligence” may be analogous to this. Humans may have reached the stage where no other being can be “smarter.” Other agents might be more knowledgeable, might have more ready memory and be faster on their feet; but in some basic sense we may be as “intelligent” as it gets. There’s a depressing thought for you. Good news: there’s no reason to believe this, it’s just an ill-expressed possibility.
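For the curious, here is a tiny sketch of the related idea that one fixed program can simulate any Turing machine handed to it as data, which is the heart of universality. The example machine adds two numbers written in unary, and the rule-table format is just a convention made up for this illustration:

```python
# A minimal Turing machine simulator. One fixed function, run(), executes whatever
# machine you pass in as a table of rules: (state, read) -> (write, move, next_state).

def run(rules, tape, state="q0", head=0, blank="_"):
    tape = list(tape)
    while state != "halt":
        if head == len(tape):
            tape.append(blank)               # grow the tape to the right on demand
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += {"R": 1, "L": -1, "N": 0}[move]
    return "".join(tape).strip(blank)

# A machine that adds unary numbers, e.g. "111+11" -> "11111".
UNARY_ADD = {
    ("q0", "1"): ("1", "R", "q0"),    # walk right over the first number
    ("q0", "+"): ("1", "R", "q0"),    # turn the plus sign into an extra 1
    ("q0", "_"): ("_", "L", "q1"),    # fell off the right-hand end: step back
    ("q1", "1"): ("_", "N", "halt"),  # erase one 1 to compensate, then stop
}

print(run(UNARY_ADD, "111+11"))       # -> 11111
```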


AI works in two ways - brute force (OK for playing chess but for not getting through life in general) or by optimising some function (we call it a cost function). Two ways of optimisation - go down hills (reduce some error between what you observe and what you desire) or random jumps in the dark (evolutionary computing). A fully extensive search could need infinite resources and infinite time. Really the idea behind NP-complete problems (like the Travelling Salesman). So brute force and brute "logic" don't always work - so you have to accept answers that are "good enough". There is the rub.

True. If one needed the roots of a fifth degree polynomial, brute force search would almost surely fail (unless by dumb luck at least one of them is in the search range). But of course there are numerical methods for approximating those roots to within any desired degree of accuracy. Machines are still way better than us at this because of their brute speed. So smart programming, brute speed, lots of memory and a willingness to “accept answers that are ‘good enough’” are important ingredients for good computational problem solving.
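One such numerical method, sketched on a made-up fifth degree polynomial (the starting guess and tolerance are arbitrary, and a production root-finder would guard against poor starting points and a vanishing derivative):

```python
# Newton's iteration x -> x - f(x)/f'(x), applied to a quintic chosen for illustration.

def f(x):
    return x**5 - 3 * x**3 + x - 2

def df(x):
    return 5 * x**4 - 9 * x**2 + 1

def newton(x, tol=1e-12, max_iter=100):
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge from this starting point")

root = newton(2.0)
print(root, f(root))   # a real root; f(root) is zero to within rounding error
```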


On a more practical side - we may be able to sequence DNA at speeds that seem incredible or believe that we can recognise faces better than humans can, but we don't have an AI computer that can identify a chair!

We still have little idea how our brains convert signals into symbols.
.
Totally agree.

Moreover, although they are sometimes modeled as neural nets, brains are not digital machines. They are hybrid digital/analog. I’m not entirely convinced that we can’t–in principle–do more than a UTM (one can mathematically construct–in the abstract–hybrid machines that realize non-computable functions).

Odelay
03-26-2015, 03:31 PM
Odelay, I was going to thank you for these links, but when I began to read the first one, I became irritated at simple errors in a rather freewheeling discussion. For example, at the very start he writes:
Imagine taking a time machine back to 1750—a time when the world was in a permanent power outage, long-distance communication meant either yelling loudly or firing a cannon in the air, and all transportation ran on hay.
I can't take this seriously, not even as a light-hearted comment, given the huge importance of the sea in transportation long before 1750, including internal waterways, seas and oceans- even if in the case of internal waterways canal boats were at one time drawn by horses. A rather more eloquent and dare I say more informed historical framework takes the world in 1400 and is in Eric Wolf's Europe and the People Without History (1982) and the first volume of Quentin Skinner's The Foundations of Modern Political Thought (1978), which takes its cue from the emerging city states of Italy in the 12th century as a point of departure for a discussion of modern ideas in politics.

May I refer you to the Centre for the Study of Existential Risk in Cambridge (UK) where you will find a good set of discussions on AI and associated issues, as well as links to the Future of Humanity Institute in Oxford and some US institutions.
http://cser.org/about/our-mission/

This is a vast topic, whether it is about the impact algorithms have had on our daily life, or on the broader issue of humans being replaced by robots with feelings...

Understood about not reading further. The author is clearly shooting for a broad audience that isn’t all that well informed on the latest thinking on AI, and he sort of caught me in that net.

Thanks for the link. To be fair to the author, one of the experts he cites extensively is Nick Bostrom out of Oxford, who happens to be the second named AI thought leader in the AI part of the site you linked to. To be sure, an article that focuses on just Bostrom’s views on AI would give a lay person plenty to think about.

Stavros, you are absolutely correct about the impact of algorithms on our current and future daily lives. Even if AGI isn’t achieved for 300 years, just the development of more and more sophisticated ANI (artificial narrow intelligence) has huge, serious repercussions for the 8 billion people currently residing on the planet. I work in Information Technology and I can foresee even System Administrators being replaced by ANI, since troubleshooting badly performing systems lends itself to large-scale trial-and-error efforts by ANI. Technology replacing technology workers.

I’m not too optimistic about what the capital barons will do with yet another financial windfall when they replace cheap labor with no labor.

Stavros
03-28-2015, 10:29 AM
Odelay, perhaps the key question in the short to medium term is: would you fly from LA to San Francisco in an aeroplane flown by a computer rather than a human being?

Odelay
03-28-2015, 02:28 PM
Odelay, perhaps the key question in the short to medium term is: would you fly from LA to San Francisco in an aeroplane flown by a computer rather than a human being?
I'm not a good one to be asked this question as my father worked within the airline industry. As a result, I would say yes. Way back when, I had a time or two within the cockpits of commercial airliners and watched as the pilot would engage the autopilot of aircraft of the late 60's.

I am not an aeronautical engineer, but I can only imagine the improvements in this technology in the intervening 45+ years. With the dramatic decrease of airline accidents since that time, it's hard to argue that autopilot technologies didn't play a role in the increase in flight safety.

And now, after this most recent accident, people are going to seriously ponder whether a computer should be able to take control away from a human. By the way, I liked your question in an earlier post about whether a computer might have figured out the best course of action in a 2009 crash had it been given a chance.

trish
03-28-2015, 04:56 PM
...would you fly from LA to San Francisco in an aeroplane flown by a computer rather than a human being?
Depends on the computer's state of mind. Has it been drinking? Is it depressed? Is it preoccupied with its deteriorating marriage? Did it get enough sleep since the last flight? Perhaps most importantly, does it nurse a growing hatred of human beings? Especially the ones who whine about the discomforts of air travel?

broncofan
03-28-2015, 09:03 PM
There would probably have to be systems in place so that a human could take control from a computer that is not functioning properly? How do we ensure the safety of that, if we can't ever trust people? What about hackers? Until a computer reaches a threshold of consciousness and refined learning ability, isn't there always someone who could tinker with the program or the operation of it?

When it reaches a state of consciousness and does not need further programming, why would it then be immune to the bad decisions of other conscious, autonomous beings?

I would trust a computer to fly, but I wouldn't mind there being a trustworthy human backup.

Stavros
03-29-2015, 12:53 AM
I'm not a good one to be asked this question as my father worked within the airline industry. As a result, I would say yes. Way back when, I had a time or two within the cockpits of commercial airliners and watched as the pilot would engage the autopilot of aircraft of the late 60's.

I am not an aeronautical engineer, but I can only imagine the improvements in this technology in the intervening 45+ years. With the dramatic decrease of airline accidents since that time, it's hard to argue that autopilot technologies didn't play a role in the increase in flight safety.

And now, after this most recent accident, people are going to seriously ponder whether a computer should be able to take control away from a human. By the way, I liked your question in an earlier post about whether a computer might have figured out the best course of action in a 2009 crash had it been given a chance.

One other curiosity about the Air France disaster was the design of the cockpit which meant that the pilot seated behind the two co-pilots (he had gone for a sleep barely an hour into the flight) could not see that the co-pilot in control of the plane had his right hand on the control stick which in other cockpits is in the middle and thus visible to other members of the crew, and it being pitch dark outside they had no idea what direction they were going in. Anyway I am with Broncofan on this because something can always go wrong with a computer, even if like Trish you kiss and stroke your computer every night before going to bed...just in case...

trish
03-29-2015, 02:24 AM
.. Anyway I am with Broncofan on this because something can always go wrong with a computer, even if like Trish you kiss and stroke your computer every night before going to bed...just in case...
Hey! How'd you know ...? Is this camera on?

Plaything
03-29-2015, 08:09 PM
There would appear to be a consensus here that ASI is, at best, a double-edged sword - I can only stand back and admire the various well-reasoned positions here...by people who clearly understand IT.

I can make it work. That's pretty much it.

But in my business there is, as yet, no likely replacement for flesh and blood. Not sure the algorithm exists that can interpret what an industrial fire will do next.

My real concern is not the possibility of some Asimovian 'I Robot' future, but rather that William Gibson will prove to be absolutely on the money in his book which imagines a dystopian world, 'Mona Lisa Overdrive' - where video games are delivered straight to the cortex.

And this, it would appear, is kind of on the horizon.

When we can all be anything we would like. When virtual reality is indistinguishable from what really 'is'.

Many of us will opt out of 'ordinary' completely.

More addictive than crystal meth...and marketed overtly through mainstream media.

sukumvit boy
03-29-2015, 11:01 PM
Robots learn to cook by watching YouTube videos, sponsored by DARPA! I guess the army needs more cooks?
I posted this earlier in another thread
http://cmns.umd.edu/news-events/features/2708
http://rt.com/news/219687-robot-learns-watching-video/
http://www.homecrux.com/2015/01/22/24542/pioneering-robotic-chefs-cook-humans-watching-youtube-videos.html

Odelay
03-30-2015, 04:26 AM
My real concern is not the possibility of some Asimovian 'I Robot' future, but rather that William Gibson will prove to be absolutely on the money in his book which imagines a dystopian world, 'Mona Lisa Overdrive' - where video games are delivered straight to the cortex.

And this, it would appear, is kind of on the horizon.

When we can all be anything we would like. When virtual reality is indistinguishable from what really 'is'.

Many of us will opt out of 'ordinary' completely.

More addictive than crystal meth...and marketed overtly through mainstream media.

Plaything, this or something similar to this - a giant opt out for large sections of society - will have to happen. The alternative would have to be some large scale genocide or eugenics to trim down a population that will largely be useless compared to what it is today. There simply won't be a sufficient # of intellectually or physically challenging jobs to support even the current 8 billion, much less what the population will be in 20 or 50 years. Might as well keep the masses zoned out on VR or drugs or anything that will keep them docile.

Odelay
03-30-2015, 04:32 AM
Stavros, you're sort of hinting around this but I can't tell from your recent missive whether you would prefer to have humans in the cockpit. What do you think of human override by computers if they detect human action that will soon cause the deaths of hundreds of people? If such an override were possible to write into autopilots, it would seem that not only might the Germanwings accident have been avoided, but perhaps the 9/11 actions as well.

Plaything
03-30-2015, 11:03 AM
Plaything, this or something similar to this - a giant opt out for large sections of society - will have to happen. The alternative would have to be some large scale genocide or eugenics to trim down a population that will largely be useless compared to what it is today. There simply won't be a sufficient # of intellectually or physically challenging jobs to support even the current 8 billion, much less what the population will be in 20 or 50 years. Might as well keep the masses zoned out on VR or drugs or anything that will keep them docile.

I guess you can't make an omelette without breaking a few eggs...

Western Economic Policies are already moving clearly and inexorably toward 'I'm all right Jack, pull the ladder up'...

Of course, this is just another commercial revolution.

Easy to look back at, for example, the age of steam as an interesting historical landmark.

Here we go again.

There will be winners.

And a whole shit load of losers.

martin48
03-30-2015, 03:09 PM
Following the tragic crash in the French Alps (http://www.bbc.co.uk/news/world-europe-32113507) I may choose the robot.




There would probably have to be systems in place so that a human could take control from a computer that is not functioning properly? How do we ensure the safety of that, if we can't ever trust people? What about hackers? Until a computer reaches a threshold of consciousness and refined learning ability, isn't there always someone who could tinker with the program or the operation of it?

When it reaches a state of consciousness and does not need further programming, why would it then be immune to the bad decisions of other conscious, autonomous beings?

I would trust a computer to fly, but I wouldn't mind there being a trustworthy human backup.

Plaything
03-30-2015, 03:13 PM
Following the tragic crash in the French Alps (http://www.bbc.co.uk/news/world-europe-32113507) I may choose the robot.

Trains are nice...

Stavros
03-30-2015, 03:27 PM
Stavros, you're sort of hinting around this but I can't tell from your recent missive whether you would prefer to have humans in the cockpit. What do you think of human override by computers if they detect human action that will soon cause the deaths of hundreds of people? If such an override were possible to write into autopilots, it would seem that not only might the Germanwings accident have been avoided, but perhaps the 9/11 actions as well.

Right now no, computers are useful but if the computer that barks at me 'Unexpected item in bagging area' when there is no such item and won't let me leave the store...no, it doesn't give me enough confidence. The solution is in cockpit design, real-time communications between the aeroplane and the nearest air traffic control -real time video streaming?-and safeguards to protect the passengers from loony crews, but is any system of any kind 100% safe?

trish
03-30-2015, 03:47 PM
Those of us who are lucky find a lot of psychological support, meaning and self-definition in our jobs. If we don’t find it in our jobs per se we find it in the fact that by having jobs we contribute to the support of our families and the economy of our communities.

There was a time when science enthusiasts naively praised the coming age of leisure which automation will make possible. Imagine a world in which people only have to work a few hours a week. Or not at all. Imagine a world where factories are “manned” by robots and offices are managed by computers and professors are online algorithms. We can devote all our time to the pursuit of art, sport, philosophy, self-improvement and self-governance. Oh, and of course, drugs, sex and serious partying. An entire population immersed within a life of full-time leisure would have to be capable of finding meaning in the pursuit of leisure. Though I’m pretty sure I’m up to it, I think society is not. We do not value leisurely pursuits in the same way we value other pursuits. We think of leisure as a sort of vice. It’s okay in small doses. But a life of leisure is a life (according to the work ethic) not worth living.

Who can see the future? Not me. Possibly automation will relieve all of us from the human labor that used to be required to ensure our survival. The first question we will have to face is: “If you don’t have a job how are you supposed to survive?” We will have to re-evaluate the political prejudices which shape our current economy. We will have to re-evaluate what it is that makes life meaningful. When we all become freeloaders we will have to re-evaluate the value of what freeloaders do.

Even AI’s will have to deal with the problem of self-worth. In Gibson’s Neuromancer the AI known as Wintermute felt incomplete. If I recall (it’s been a decade since I read it) it wanted to “merge” with Neuromancer. Besides engaging in a lot of nefarious manipulation, Wintermute dealt with its self-perceived inadequacies by constructing found art sculptures in the style of Joseph Cornell.

martin48
03-30-2015, 04:42 PM
Deep points, but if we developed super intelligent machines what use would they have for us?

Just as some people push for apes and other animals to be granted human rights, so they will urge robots to be given rights. With rights come obligations - never certain what they are for my pet dog.

When do robots get the right to vote? Have they got to be 21 or over?

Does high AI mean that robots develop personality traits and issues (ah, issues - a characteristic so far limited to the human species)?

Will they become paranoid like Marvin in Hitchhiker's Guide to the Galaxy?

Marvin: "I am at a rough estimate thirty billion times more intelligent than you. Let me give you an example. Think of a number, any number."
Zem: "Er, five."
Marvin: "Wrong. You see?" (http://refspace.com/quotes/Douglas_Adams/Q927)

trish
03-30-2015, 05:08 PM
My guess is that ultimately we'll be doin' all the shit work while the super-intelligent AI's will be drinkin' motor oil martinis, partyin' hardware and spendin' bit coins by the gig on high class ladybots like they goin' out of style.

martin48
03-30-2015, 05:26 PM
So, no change then!


My guess is that ultimately we'll be doin' all the shit work while the super-intelligent AI's will be drinkin' motor oil martinis, partyin' hardware and spendin' bit coins by the gig on high class ladybots like they goin' out of style.

Plaything
03-30-2015, 06:19 PM
My dog is 'obliged' not to shit in the house.

Otherwise it has the 'right' to long spells of healthy fresh air, and a rewarding outdoors existence.

I concede that I may have drifted off topic...

martin48
03-30-2015, 06:28 PM
Drifting off - no problems. Trish and I do it all the time. We'll hijack anything


My dog is 'obliged' not to shit in the house.

Otherwise it has the 'right' to long spells of healthy fresh air, and a rewarding outdoors existence.

I concede that I may have drifted off topic...

trish
03-30-2015, 06:35 PM
We'll hijack anything
Homeland Security, NSA and FBI, please be advised: THIS IS NOT TRUE! Martin is only joking. Really. Okay, okay, he did it. It wasn't me. Honest.

Odelay
03-31-2015, 02:51 AM
Homeland Security, NSA and FBI, please be advised: THIS IS NOT TRUE! Martin is only joking. Really. Okay, okay, he did it. It wasn't me. Honest.
Right! After today we have one data point that hijacking the NSA is not as easy as it may appear in the movies. On the other hand, infiltrating the White House seems to be easier than what it appears to be in the movies. Go figure.

martin48
03-31-2015, 08:31 AM
Well, that's one secret sleeper cell that's busted.


Homeland Security, NSA and FBI, please be advised: THIS IS NOT TRUE! Martin is only joking. Really. Okay, okay, he did it. It wasn't me. Honest.

broncofan
03-31-2015, 09:36 AM
Right now no, computers are useful but if the computer that barks at me 'Unexpected item in bagging area' when there is no such item and won't let me leave the store...no, it doesn't give me enough confidence. The solution is in cockpit design, real-time communications between the aeroplane and the nearest air traffic control -real time video streaming?-and safeguards to protect the passengers from loony crews, but is any system of any kind 100% safe?
I agree. I used to have a bad phobia of flying. When I looked up the statistics years ago, among most U.S. airlines the chances of a crash were 1 in several million (maybe 5 million or more). There are specific functions that computers perform very efficiently, and perhaps a fail-safe mechanism that prevents a nosedive would be useful, but any change to existing protocols could in theory cause a net decrease in safety.

I have an automobile that issues warnings every time I do something it thinks is unsafe. For instance, if the car registers the tire pressure as low, I am not allowed to shift it into a faster gear. However, occasionally it registers low tire pressure even though when I go to the mechanic, the tire pressure is normal. So it is the electronic system (the computer) and not the tire pressure that needs to be reset. I would be concerned about the tradeoff of any attempt to inhibit pilots from being able to perform functions that in their judgment are necessary. Of course, a fail safe that identifies and prevents clearly unsafe maneuvers would be useful, but what are the odds that it registers something as unsafe that could under a rare set of circumstances be necessary?

After all, pilots intentionally crashing commercial airliners is very rare. We've seen 3 or so of them in how many tens of millions and yet we are considering fail safe mechanisms that could have tradeoffs...even the protocol that enabled the crash was intended to prevent another unlikely event....unauthorized people entering the cockpit!

Stavros
03-31-2015, 10:22 AM
Robots learn to cook by watching YouTube videos, sponsored by DARPA! I guess the army needs more cooks?
I posted this earlier in another thread
http://cmns.umd.edu/news-events/features/2708
http://rt.com/news/219687-robot-learns-watching-video/
http://www.homecrux.com/2015/01/22/24542/pioneering-robotic-chefs-cook-humans-watching-youtube-videos.html

This phrase leapt out at me from your first link:
“We are trying to create a technology so that robots eventually can interact with humans,” said Cornelia Fermuller, an associate research scientist at UMIACS. “So they need to understand what humans are doing. For that, we need tools so that the robots can pick up a human’s actions and track them in real time."
-I don't think of my computer as a robot; indeed, in spite of my age, or because of it, I am forever grateful to see the end of typewriters and Tippex, but I am not sure I want to interact with a robot. And I don't want a robot watching my every move. I have seen Haley Joel Osment in AI, and it is wrong, very wrong.

Odelay
04-01-2015, 04:39 AM
After all, pilots intentionally crashing commercial airliners is very rare. We've seen 3 or so of them in how many tens of millions and yet we are considering fail safe mechanisms that could have tradeoffs...even the protocol that enabled the crash was intended to prevent another unlikely event....unauthorized people entering the cockpit!
It's funny you should say this because over the last 24 hours or so I was reflecting on my snap opinion and reversed it based on the very point you make. And really, having backup human controls in some remote location or at air traffic towers isn't the answer either. There's enough redundancy, especially with at least 2 pilots in every cockpit - even the small 50 seat jets - that anything more is just overkill. Two pilots, plus the sophisticated computer controls (which I'm sure is far better than what we're seeing in automobiles), is plenty to avoid 99.99% of shenanigans or just plain accidents.

Stavros
05-12-2015, 09:49 AM
This ad aired on the TV the other night; I was taken aback, and although I now understand what it's for, I find it an eerie mix of the desirable and the disturbing...tempted to buy one -if I live long enough for it to happen!

https://www.youtube.com/watch?v=vc7k-DwrITI

Stavros
06-07-2015, 09:34 AM
Along with 'Humans', which airs in two weeks, there is Ex Machina, which I saw the other day; and then I read in today's papers this report of a speech by the Astronomer Royal, Martin Rees, on the extent to which the future is mechanical:
“There has been just a thin sliver of time when organic beings have existed and billions of years after machines will take over, so they will be the future.”

The link is here:
http://www.telegraph.co.uk/news/science/space/11657267/Astronomer-Royal-If-we-find-aliens-they-will-be-machines.html

buttslinger
06-07-2015, 10:04 PM
Artificial intelligence won't happen, because real computers have no motive to walk down a crowded street in New York and people-watch; they are complete as they are.
Using jumbo computers to fashion genetic engineering, that's different, you could quite possibly program a certain super computer to weed out all the bad DNA crap, and fashion things the humans like: intelligence, looks, health, longevity.
Evolution hasn't changed the alligator or cockroach over millions of years, but in one or two hundred years we could be popping out little master kinders beyond our local imagination. At the very least knock out some diseases or basic genetic flaws.
Better than a computer that can cook your breakfast or fly your plane is a computer that can make your children living, breathing Gods. Cue the Twilight Zone music: humanity is flawed, catastrophe awaits.

sukumvit boy
07-02-2015, 04:14 AM
Here's some recent news.
http://www.cnet.com/uk/news/musk-backed-ai-group-to-give-7m-on-artificial-intelligence-research/



Mr. Musk at it again with his, what I find eccentric, campaign about the dangers of AI.
But hey, I grant the guy his quirks, remembering that Newton ended his life immersed in alchemy, Alfred Russel Wallace in spiritualism, and Einstein abhorred quantum mechanics.

Stavros
07-02-2015, 03:18 PM
I watched the first episode of Humans, but was not impressed by another version of robots in rebellion, or the heroic individuals fighting the machine...and that's just a tv programme.

What puzzles me about a lot of this AI material is the inability of people to decide if we humans have a soul even when they talk about robots developing initiatives on their own. Humans may be 'machines' in the sense that most of us have a body and internal organs, and need water, and air and food to live, but many scientists cannot explain why one person has creative skills that another does not, or, crucially, why we are all different if our bodies are all the same. Surely the limitation of AI is that it will only have as its historical memory whatever is programmed into it, and that it will not be capable of writing poetry unless it is an imitation of someone else's?

trish
07-02-2015, 05:03 PM
Given the vast chaotic complexity of the world it seems perfectly reasonable to me that two identical artificial intelligences placed in distinct but similar environments might develop entirely different behavior patterns which ultimately cannot be explained in any satisfactory detail. One may write completely original poetry in its very own inexplicable style. The other may develop an obsession for money and power. Once the complexity of a dynamic system passes a certain threshold, its behaviors become effectively incalculable. At that point it’s useless to attempt to understand it on the level of switches and circuits. It is more readily understood on the more abstract level of its patterns of behaviors. Intentions, goals, and souls are higher level abstractions that clearly apply to the behaviors and personalities of the machines we call people. The question is, “Will it ever become appropriate to seriously apply these concepts to other machines?”

For me the worrisome part of AI is the possibility that some machines will have “souls” in the sense that we do; i.e. we feel, we love, we experience the world and are driven to create art, music, poetry that reflects our inner selves in reaction to those experiences. We are also machines. I see no reason other sorts of machines might not also experience the world in similar ways. The moral danger posed by AI is two-fold: 1) there is the possibility that we may refuse to extend our empathy to machines that deserve it; and 2) there is the possibility that we may grant personhood to machines that are not persons but simply passable simulations. (Btw in other circumstances I would find the phrase “passable simulation” somewhat toxic.)

buttslinger
07-02-2015, 06:57 PM
If you considered DNA as computer code, then it is conceivable that one day they could construct a computer clone of a human, or a dog, or a tree. You could even network in some raven DNA or dolphin DNA. You could even un-program disease and even death, in a laboratory setting.
I think even a blade of grass has self awareness, it has a desire to live and procreate, it will alter its own DNA to adapt to its environment. It has no need to talk to me. I think it feels things, just totally different than humanoids.
95% of yoga is breathing, and the motive is the end of suffering, not super intelligence, so somewhere along the line the stress of constructing a smart toaster might make us re-examine our motives: why would a real life Cmdr Data give a shit about us? What if GORT grows some balls and starts blasting people for kicks?
If a SUPER computer is allowed to reprogram itself...hmm, I don't think any scientist alive now or in the future would allow that, without a kill switch real close by.
If you gave a computer lungs of some sort, I guess you could manufacture a soul, and since any computer made now can beat me in chess EVERY TIME, in that sense we already have intelligent computers.
One day in the far future I suppose you could build a huge computer in Lucerne that would be the ultimate ORACLE for mankind, curing cancer, running the IRS, exploring our history and all the galaxies, but to the computer that stuff would be like paying the rent, answering the mysteries of the universe in exchange for electricity and lubricating oil. Housework. Who knows what motives an all-knowing entity would have?

fred41
07-02-2015, 07:56 PM
I think for a machine to have feelings, we would have to be able to create an artificial equivalent to things that cause chemical reactions in our brains, such as hormones, that can create certain emotions, like - happiness, anger, pleasure and love. For instance, a machine might be programmed to provide a particular service, but unless there is an induced reward system to create a feeling of contentment or pleasure, there would never be any real self satisfaction for providing that service...conversely, there would also be no feelings of regret or anger either.

I also believe that is what a 'soul' is...everyone's own individual DNA and environmentally induced internal chemical factory.

(i'm probably vastly oversimplifying this...:) )
.

Stavros
07-03-2015, 09:21 AM
I think for a machine to have feelings, we would have to be able to create an artificial equivalent to things that cause chemical reactions in our brains, such as hormones, that can create certain emotions, like - happiness, anger, pleasure and love. For instance, a machine might be programmed to provide a particular service, but unless there is an induced reward system to create a feeling of contentment or pleasure, there would never be any real self satisfaction for providing that service...conversely, there would also be no feelings of regret or anger either.

I also believe that is what a 'soul' is...everyone's own individual DNA and environmentally induced internal chemical factory.

(i'm probably vastly oversimplifying this...:) )
.

Or you could have a situation in which an AI becomes the perfect killer, programmed to do nothing else without a thought or an emotion involved, as indeed is the kind of AI one sees in those trashy films of recent years.

If I say that I find your definition of the soul unsatisfactory, it is equally unsatisfactory if I cannot produce a better alternative -perhaps the question is not what the soul might be, but whether or not it exists at all, something which science has failed to conclusively prove one way or another.

Stavros
07-03-2015, 10:35 AM
Given the vast chaotic complexity of the world it seems perfectly reasonable to me that two identical artificial intelligences placed in distinct but similar environments might develop entirely different behavior patterns which ultimately cannot be explained in any satisfactory detail. One may write completely original poetry in its very own inexplicable style. The other may develop an obsession for money and power. Once the complexity of a dynamic system passes a certain threshold, its behaviors become effectively incalculable. At that point it’s useless to attempt to understand it on the level of switches and circuits. It is more readily understood on the more abstract level of its patterns of behaviors. Intentions, goals, and souls are higher level abstractions that clearly apply to the behaviors and personalities of the machines we call people. The question is, “Will it ever become appropriate to seriously apply these concepts to other machines?”

For me the worrisome part of AI is the possibility that some machines will have “souls” in the sense that we do; i.e. we feel, we love, we experience the world and are driven to create art, music, poetry that reflects our inner selves in reaction to those experiences. We are also machines. I see no reason other sorts of machines might not also experience the world in similar ways. The moral danger posed by AI is two-fold: 1) there is the possibility that we may refuse to extend our empathy to machines that deserve it; and 2) there is the possibility that we may grant personhood to machines that are not persons but simply passable simulations. (Btw in other circumstances I would find the phrase “passable simulation” somewhat toxic.)

This is the kind of post that to me illustrates the weakness of science when it attempts to deal with the soul, because in fact if we are machines, then it is difference that is inexplicable, not poetry. To claim that it is "perfectly reasonable to me that two identical artificial intelligences placed in distinct but similar environments might develop entirely different behavior patterns which ultimately cannot be explained in any satisfactory detail" is gibberish. Either the AI are identical or they are not, and surely it is precisely because the clothing and diet of the Inupiat are so different from those of the Masai that we try to understand both without resorting to a crude environmental determinism -A1 wears a lot of clothes because it is cold; A2 wears few clothes because it is so hot. It is true that from Roman Jakobson through Levi-Strauss to the universal pragmatics of Habermas, studies of language have attempted to illuminate the structural affinities that human languages have with each other, and one could argue that most religions attempt to do the same thing and come up with structurally the same solution -that there is a perfect being and that it has created a system of punishment and reward for humans that helps societies survive without collapsing into chaos. But within all that, the unique signature of the creative artist begs the question: why is it even unique?

Scientists, it seems to me, tend to reconfigure everything in the world in terms of mathematics - take as an example the famous Infinite Monkey Theorem, in which a monkey, say a chimpanzee, sitting in front of a typewriter will eventually produce the complete works of Shakespeare. The theorem works on the level of maths or, as we would put it today, algorithms, because there are only so many letters on a keyboard and in the works of Shakespeare, and at some point in infinity all of the conceivable permutations would have been typed and there on the page you would have that famous phrase from King Lear: O, let me not be mad, not mad sweet heaven.

Now, suppose an AI is created that is formed as a robot or an android or whatever they are called these days, and into its computerised memory is fed the entire contents of the Library of Congress, the Bodleian, the Bibliotheque Nationale and so on- if this AI then produced a play, would it be original, more importantly, unique? Or would it be creative at all? Mozart used a formula to write music that had been established by Bach and Haydn, but even though he often repeated himself, because he was writing for money much of the time, Mozart stands out in an era of classical music because of those moments -to enthusiasts, exquisite moments- which only Mozart could have written -a blend of chords, a melodic line: it is this ability to create something unique that others can still appreciate and understand that AI cannot produce, because a robot does not have a soul.

I agree that I would struggle to define what a soul is; a psychologist once admitted to me that his profession is unable to define a person, perhaps because humans can not only create a persona that is unique to them, but can create more than one -such as Michael on Monday who becomes Michelle on Friday, even if only in a nightclub.

So in fact, an AI might indeed have a 'soul', but could it ever have a soul?

trish
07-03-2015, 06:00 PM
This is the kind of post that to me illustrates the weakness of science when it attempts to deal with the soul,...
Indeed, I do not think science has much to say on the subject. There are few if any refereed papers in scientific journals which make any pronouncements on the existence or non-existence of souls.


To claim that it is "perfectly reasonable to me that two identical artificial intelligences placed in distinct but similar environments might develop entirely different behavior patterns which ultimately cannot be explained in any satisfactory detail" is gibberish.

I’m sorry that you find it so. Perhaps the claim lost clarity through my attempt to write tersely. The word “similar” is meant to convey something less than “identical”. So two identical machines in only similar environments will eventually (if they are designed to interact with the environment in significant ways) display divergent behaviors because of what is popularly known as the butterfly effect. Even were the universe to unfold in a deterministic way (which I don’t necessarily believe) it would be impossible over the long run to calculate and predict the precise ways in which the behaviors of the two machines will diverge. If the machines themselves were only similar, rather than identical, the problem of understanding, calculating and predicting with exactitude the nature of their divergence would be compounded. [Even a system as simple as the solar system is chaotic in this sense. The paths of the celestial bodies can only be reliably predicted over finite periods of time and they are subject to sudden and sometimes catastrophic interruptions from cosmic interlopers.]
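A tiny numerical illustration of that divergence, using the logistic map as a stand-in for any sufficiently complicated system (the parameter and starting values below are arbitrary, chosen only to sit in the chaotic regime):

```python
# Two trajectories of the logistic map x -> r*x*(1-x) that start almost identically
# soon bear no resemblance to one another - sensitive dependence on initial conditions.

r = 3.9                        # a parameter value in the chaotic regime
x, y = 0.500000, 0.500001      # "identical machines, only similar environments"

for step in range(1, 61):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}:  x = {x:.6f}   y = {y:.6f}   gap = {abs(x - y):.6f}")
```

By the last few lines of output the two trajectories are, for practical purposes, unrelated, even though nothing random ever happened.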


Scientists it seems to me, tends to reconfigure everything in the world in terms of mathematics- take as an example the famous Infinite Monkey Theorem in which a monkey, say a Chimpanzee sitting in front of a typewriter will eventually produce the complete works of Shakespeare. The theorem works on the level of maths or as we would put it today, algorithms, because there are only so many letters on a keyboard and in the works of Shakespeare and at some point in infinity all of the conceivable permutations would have been typed and there on the page you would have that famous phrase from King Lear: O, let me not be mad, not mad sweet heaven.

It’s certainly difficult to deny the mathematics. But of course there are only a finite number of monkeys on Earth and the expected amount of time it would take for them to randomly produce a line of Shakespeare exceeds the time it will take for the Sun to nova.
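A back-of-envelope version of that estimate, with admittedly made-up (and generous) figures for the number of monkeys and their typing speed, and the punctuation of the King Lear line ignored:

```python
# Expected effort for monkeys to type one line of Lear at random, on a 27-key
# typewriter (26 letters plus space). The monkey figures below are pure invention.

phrase = "o let me not be mad not mad sweet heaven"
keys = 27
attempts_needed = keys ** len(phrase)          # expected random tries for one hit

monkeys = 10**6                                # a million monkeys...
keystrokes_per_sec = 10                        # ...each typing ten keys a second
tries_per_sec = monkeys * keystrokes_per_sec   # crude: one fresh try per keystroke

seconds = attempts_needed / tries_per_sec
years = seconds / (3600 * 24 * 365)
print(f"{attempts_needed:.2e} expected attempts, roughly {years:.2e} years")
# about 1.8e57 attempts and 5.7e42 years; the Sun has only a few billion years left
```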


Now, suppose an AI is created that is formed as a robot or an android or whatever they are called these days, and into its computerised memory is fed the entire contents of the Library of Congress, the Bodleian, the Bibliotheque Nationale and so on- if this AI then produced a play, would it be original, more importantly, unique? Or would it be creative at all? Mozart used a formula to write music that had been established by Bach and Haydn, but even though he often repeated himself, because he was writing for money much of the time, Mozart stands out in an era of classical music because of those moments -to enthusiasts, exquisite moments- which only Mozart could have written -a blend of chords, a melodic line: it is this ability to create something unique that others can still appreciate and understand that AI cannot produce, because a robot does not have a soul.
Should a machine other than a human being produce a play, would it be unique? Does producing a play make a human being unique? I think we agree on the answer here. It doesn’t seem to me that the production of various works of art is sufficient proof of sentience. I’m not a subscriber to the Turing Test. I think people are prone to anthropomorphize and attribute human qualities to creatures (and perhaps things) that do not have those qualities. But this doesn’t prove that machines can’t be sentient or conscious. It only demonstrates the difficulty of deciding whether or not a particular machine is such. I know first hand that I am sentient. I have to take your word for it that you are sentient (and I do, even though we never met and you may be just an algorithm running on the internet). How do we decide the sentience of others?

Intention, desire, empathy, jealousy etc. are some of the higher level concepts we employ when we attempt to understand why the people we encounter behave the way they do. When the boss fires you, you want to know what he’s thinking, not what neuronal complexes are firing. One high level concept some people seem to find useful in understanding other human beings is that of the “soul.” I never became very adept at the use of this concept. I don’t believe it brings very much to the discussion of poetry, painting, writing, creativity, moral and ethical philosophy, the meaning of life or the nature of consciousness and sentience. Those who subscribe to the notion seem to think that a physical system cannot be sentient, creative, loving and unique unless a divine being has installed a soul somewhere within it. At least there’s one thing upon which we can agree: one person’s gibberish is another person’s chatter.

buttslinger
07-03-2015, 07:58 PM
Nobody with a trillion dollars to spend is going to want a soul or a play or some poetry. If I want a soul to own, I'll get a cat and give it food and neck rubs; he'll stick around.
If you build a computer you want one that is smart, smarter than the IBM computer Watson, which won on Jeopardy. You want one that can keep the North Koreans from hacking our computers; you don't want a master computer that decides we should go through channels and give Iran a few nuclear missiles in the pursuit of fairness.
A creative computer might earn some young genius a blue ribbon at the science fair to please his parents, but if you're going to put thousands of man hours and countless headaches into building a computer that can do anything, you're going to keep strict control over what it does, and you're going to want to get a return on your investment, whether it's ruling the stock market, or destroying ISIS.
Of course this is vastly oversimplified, but I'm sure there have been talks among the techs in the white coats about building a supercomputer to deal with national defense, as well as models for a smart car, pollution concerns, and just like the A-bomb, getting a genius computer before the Chinese do. WWIII is going to be fought in the banks. The trick will always be staying one step ahead of the competition.
Picasso, Shakespeare, Mozart....there's a lot of pain and death in their art. They accept the fact that there is nothing new under the sun.
The Atomic bomb was a little piece of sunshine right here on earth, under our control. A supercomputer will be a super high voltage powerhouse that is always on and never gets tired, in our control. If it ever becomes self aware, some technician is going to be in deep doodoo. The parents of a super computer will be greed and fear.

sukumvit boy
07-03-2015, 09:12 PM
Robby the Robot! Love those classic pic posts.

trish
07-04-2015, 05:20 PM
if you're going to put thousands of man hours and countless headaches into building a computer that can do anything, you're going to keep strict control over what it does, and you're going to want to get a return on your investment, whether it's ruling the stock market, or destroying ISIS.
That is exactly right. The things we tend to study and particularly the things we design are predictable. The computer you’re using to interface with HA does (by and large) what you tell it to do. But the complexity of the world is such that most interactions are not predictable to that degree. A small perturbation in input can yield exponentially divergent output. Essentially, in the real world if you do the same thing over and over again, you shouldn’t be surprised if you sometimes get different results. This is partly why siblings raised in the same environment by the same parents grow up with different interests, loves, personalities, talents and abilities; the other part being that they are not identical to begin with.

We encourage our children to be unique, creative and open to the possibilities of the world. We want our servants to be obedient and predictable. When we design machines, we design servants. But nature is not in the business of producing servants and slaves for profit; and ultimately we are all of us, man and machine, products of nature. Shit happens.

Stavros
07-04-2015, 09:10 PM
I’m sorry that you find it so. Perhaps the claim lost clarity through my attempt to write tersely. The word “similar” is meant to convey something less than “identical”. So two identical machines in only similar environments will eventually (if they are designed to interact with the environment in significant ways) display divergent behaviors because of what is popularly known as the butterfly effect. Even were the universe to unfold in a deterministic way (which I don’t necessarily believe), it would be impossible over the long run to calculate and predict the precise ways in which the behaviors of the two machines will diverge. If the machines themselves were only similar, rather than identical, the problem of understanding, calculating and predicting with exactitude the nature of their divergence would be compounded. [Even a system as simple as the solar system is chaotic in this sense. The paths of the celestial bodies can only be reliably predicted over finite periods of time and they are subject to sudden and sometimes catastrophic interruptions from cosmic interlopers.]


It may be that I have a narrower concept of AI than yours, for example AI as something manufactured by a company which produces say 1,000 identical machines, and just as one expects every Apple Air to be the same whether it is bought in London or Chicago, so the AI produced by Stark Industries would all be identical down to the last detail. From this perspective, it is surely nonsense to believe that two identical machines will evolve in any sense or diverge as they are machines with a precise range of functions. The only way they could 'diverge' would be to acquire a mind just as we do, capable of being illogical in a way that computers cannot be. Even a command to self-destruct is not illogical to a computer.

The deeper point is the old one about what it is that makes humans different from the other species we share this planet with. The mind or the soul remains the key to this, surely? And it again comes back to the fact that we all have the same working parts yet are also individuals. I don't see how a machine can be either designed or made which is as human as a human, and I am not sure I want AI as anything other than a mindless gadget doing what gadgets do, but that is also because I do not use the cloud, or dropbox, or have my light switches at home programmed to turn on when I open the door; I don't have a sophisticated oven that plays Mozart while heating a pie (the two don't go together anyway). So maybe the problem is that I am just too old..!

trish
07-05-2015, 02:10 AM
It may be that I have a narrower concept of AI than yours, for example AI as something manufactured by a company which produces say 1,000 identical machines, and just as one expects every Apple Air to be the same whether it is bought in London or Chicago, so the AI produced by Stark Industries would all be identical down to the last detail. From this perspective, it is surely nonsense to believe that two identical machines will evolve in any sense or diverge as they are machines with a precise range of functions. The only way they could 'diverge' would be to acquire a mind just as we do, capable of being illogical in a way that computers cannot be. Even a command to self-destruct is not illogical to a computer.
Consider that at one time there was a single self-replicating molecule, a nano-machine. It spawned two identical daughters, who each spawned two identical granddaughters. During the course of a few billion years shit happened and the progeny are as divergent as any mind could possibly imagine.

I do find it somewhat amusing that people as celebrated as Stephen Hawking actually worry about the dangers of artificial intelligence. I do not find it very likely that sentience can be totally explained as a digital construct, though I do think (indeed I would say know) that dynamical systems can be conscious (I’m one of them). My more immediate worries concerning AIs cluster around the economics of unemployment.

There is a slim possibility that we are both right (or both wrong, depending on how you look at it): There is a divine being and he uploads and installs into each human born a custom designed neural algorithm called a soul. Because it runs on flawed hardware (original sin) it’s prone to malfunction, and because it’s an abstract algorithm it’s immortal.

Stavros
07-05-2015, 02:39 PM
I do not find it very likely that sentience can be totally explained as a digital construct, though I do think (indeed I would say know) that dynamical systems can be conscious (I’m one of them). My more immediate worries concerning AIs cluster around the economics of unemployment.


I have never thought of you before as a 'dynamical system' even if you do have a dynamic personality...I think you can do better than that.

martin48
07-10-2015, 01:38 PM
There is a divine being and he uploads and installs into each human born a custom designed neural algorithm called a soul. Because it runs on flawed hardware (original sin) it’s prone to malfunction, and because it’s an abstract algorithm it’s immortal.

I do wish you would keep my flawed hardware out of the argument.

You would think that this divine being had considered the flawed hardware and designed his/her algorithm to suit.

martin48
07-10-2015, 01:42 PM
Some flawed robotic hardware with an evil designer

trish
07-10-2015, 03:35 PM
Evolution is reproduction through a noisy channel. The flaws are essential :)
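A toy model in the spirit of Dawkins' 'weasel' program shows how little is needed: copy a string through a noisy channel, keep the copy closest to a target, repeat. The target phrase, alphabet, mutation rate and brood size below are arbitrary choices:

```python
import random

# Reproduction through a noisy channel: copy a string with occasional random errors,
# keep the copy closest to a target, repeat. Parameters are arbitrary illustrative choices.
ALPHABET = "abcdefghijklmnopqrstuvwxyz "
TARGET = "the flaws are essential"

def mutate(parent, rate=0.02):
    # Each character is miscopied with probability `rate`.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in parent)

def score(s):
    return sum(a == b for a, b in zip(s, TARGET))

parent = "".join(random.choice(ALPHABET) for _ in TARGET)   # start from pure noise
generation = 0
while parent != TARGET and generation < 5000:
    brood = [parent] + [mutate(parent) for _ in range(50)]  # parent competes with noisy copies
    parent = max(brood, key=score)                          # selection keeps the best
    generation += 1

print(f"after {generation} generations: {parent!r} ({score(parent)}/{len(TARGET)} characters match)")
# With rate = 0 (a noiseless channel) nothing ever changes; with noise,
# variation plus selection climbs from gibberish to the target.
```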

martin48
07-10-2015, 04:03 PM
Without perturbations, there would be no variety and no progress.

Laphroaig
07-11-2015, 09:58 AM
“Real stupidity beats artificial intelligence every time.”

― Sir Terry Pratchett, Hogfather

buttslinger
07-11-2015, 09:59 PM
Webster: INTELLIGENCE - the ability to learn or understand things or to deal with new or difficult situations

Intelligence is not an entity unto itself, it's like a zero in math, it only has value when you add it to a number.

Even without a super computer, if you gave a really smart ethical guy complete dictator control over the United States, you could probably fix the military, IRS, budget, etc etc etc in one year. Just impose martial law and make the changes any idiot can see needs to be made.

The Big Bang was caused by a small flaw in the universe, made it spew out all over the place, but it's all a big mistake, the perfect universe still exists, about the size of a softball, spaceless and timeless, and formless in my head. You have to be beyond stupid to see it, you practically have to cut off blood flow to your brain.

It's possible the USA already has a super duper computer or alien technology, and we're waiting for a really good occasion to unveil it. Gotta keep it top secret, ya know. Lives are at stake.

martin48
07-12-2015, 11:25 PM
Interesting theories. Not being an idiot, I can't understand it all.

broncofan
07-13-2015, 12:27 AM
about the size of a softball, spaceless and timeless, and formless in my head. You have to be beyond stupid to see it, you practically have to cut off blood flow to your brain.

Anything you have to see without much blood in your brain is more likely a hallucination than the perception of real phenomena. It's more likely to be a manifestation of aberrant neural activity than a valid percept. If we develop computers capable of generating ideas, let's hope the ideas relate to things that actually exist or could exist.

I was prescribed a toxic dose of a medication once and when I woke up the next morning I saw a translucent spider crawling towards me through the air. I was fully aware it was not an actual spider, but it had all of the qualities of a spider in appearance, save for its translucence. This softball sounds about as real as that spider (even if you meant it as a metaphor I don't know what you mean):).

broncofan
07-13-2015, 12:33 AM
Intelligence is not an entity unto itself, it's like a zero in math, it only has value when you add it to a number.
And not to be pedantic, but zero does not have value when you add it to a number. Edit: I'm guessing you mean as a placeholder, in which case, I guess it does.

buttslinger
07-13-2015, 04:21 PM
If you add zero to six, the zero becomes a six,
If zero was sleep, and you added it to Jimmy, Jimmy might appear dumb as a post, completely worthless, yet Jimmy is still Jimmy.

Some people talk about trading all their treasures for a perfectly round pearl of great value. Or go off looking for a riddle, a dewdrop on the sea. A computer might do math thousands of times faster than me, or lots of other stuff better than me, a smart guy might be smarter than me about pretty much everything. But I value ME over them, because all the laws of physics and nature say I gotta be me. Reality trumps Intelligence.

I mean, really, most guys here can't get past a dude with tits and a pretty face, right? Is it real or a hallucination?

trish
07-14-2015, 04:21 AM
The value of zero is zero.
The value of zero plus six is six.
The value of six times zero is zero.
The value of six to the zero power is one.
The value of me equals the value you,
but our values can't be added, divided nor on a computer run.

Zero is simply the identity element of the additive group canonically embedded within the complete real closed field. Our field is open, our identity incomplete.

Stavros
07-14-2015, 10:39 AM
The value of me equals the value you,
but our values can't be added, divided nor on a computer run.


So we are not machines, and we have souls -unique souls?

trish
07-14-2015, 10:09 PM
Not all machines are digital computers. The brain has aspects of a neural net, but it's not temporally synchronized, nor completely digital in all of its aspects. This doesn't mean you aren't a physical system. But all that is beside the point: my comment on Buttslinger's post is about value, not the origins of sentience. I'm merely suggesting that the values we attach to things aren't always numbers, or even abstractions that can be easily compared. What, for example, was the value of Bikini Atoll before it was obliterated? The U.S. military may have attached a dollar amount to the value. Others would've been hard pressed to express its value in those terms. But it doesn't mean the Bikini Atoll had a soul. Perhaps it did. Perhaps not. But to say it isn't an electronic digital computer and it has an incomparable value doesn't decide the issue.

Stavros
07-15-2015, 01:04 AM
Trish, we are not really getting anywhere with this. Science can tell us so many things about the body, perhaps not as much about the brain as it would like, but is still floundering when it comes to explaining why humans with the same constituent parts do not produce the same things over and over again - or rather, since humans are in fact very repetitive creatures, why one human can paint and another cannot, where a unique imagination separates his or her work from everyone else's. Whether you call it the mind or the soul, we are surely more than the sum of our parts. But why?

buttslinger
07-15-2015, 01:47 AM
Stavros and Trish both love to LORD it over us with their engorged vocabularies, I think my only unfinished business here would be to hear Trish admit to at least the POSSIBILITY of a Universal God that starts to cook where our paths end.

trish
07-15-2015, 02:09 AM
Stavros and Trish both love to LORD it over us with their engorged vocabularies, I think my only unfinished business here would be to hear Trish admit to at least the POSSIBILITY of a Universal God that starts to cook where our paths end.
Of course. Way up beyond the clouds in a parallel set of dimensions there may exist a race of spider-legged gods that creep along the cosmic web, oozing out the Nambu strings of which the universe is woven. We worshipfully commune with the awful Mother of that nest by the thin filaments upon which we tug with our prayerful thoughts. The trick is finding reason to believe such a thing and then to act meaningfully on that belief. Should some poor father stand accused of murdering his son because he thought the Web Mother bade him to do it, would you find him guilty or not guilty by reason of insanity?

buttslinger
07-15-2015, 02:25 AM
Of course. Way up beyond the clouds in a parallel set of dimensions there may exist a race of spider-legged gods that creep ......, would you find him guilty or not guilty by reason of insanity?

I'd probably find him not guilty, if he killed his wife or business partner I'd find him guilty. Let's play hardball. Are you telling me that all the black people that go to Sunday School are ignorant?

trish
07-15-2015, 02:43 AM
A snowflake consists of nothing but H2O and yet each is unique. Why? Not because it has a soul, but because of the multitudinous variations physics allows in the formation of ice crystals and the multitudinous fluctuations in the ambient environment of these delicate lattices as they grow. So too with living creatures.

There are roughly a hundred billion neurons and on the order of a hundred trillion synaptic connections in a human brain. Even a crude binary description of the state of each connection would take a hundred trillion bits, admitting two to the power of one hundred trillion possible configurations. Clearly we won’t ever be predicting human behavior on the level of neurons. To understand why humans sometimes write poetry we need higher level concepts. Perhaps ideas belonging to fields other than science. But clearly, with so many available biological variations and so much variation and fluctuation in our environment and our lives as we grow, mature and learn, it is natural to expect that when one human is found who writes poetry, not all will necessarily do so. Souls are not needed to understand the uniqueness of human beings. They may be needed to explain other things about humans, but I haven’t been told yet what that might be, or how such explanations work.
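To get a feel for how hopeless an exact neuron-level description would be, here is a one-line calculation; treating each of a hundred trillion connections as a single bit is a deliberately crude assumption, made only to show the scale:

```python
import math

# Deliberately crude assumption: 1e14 synaptic connections, each treated as one bit.
connections = 10**14
decimal_digits = connections * math.log10(2)      # number of digits in 2**connections
print(f"2**{connections} has about {decimal_digits:.1e} decimal digits")
# ~3.0e13 digits: merely writing down the number of possible brain states is hopeless,
# let alone enumerating or predicting them.
```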


Whether you call it the mind or the soul, we are surely more than the sum of our parts. But why?

I’m not sure what “sum” means here. Surely it doesn’t mean the same as it does in arithmetic. Nor does it mean the same thing as “aggregate”, “collection” or “union.” We are not the mere collection of all of our parts, that’s for certain. But then, neither is a car engine. Take your car apart and put all the pieces into a huge box. It will no longer be your car. A car is not a simple sum of all of its parts. The parts of a car are integrated and interfaced in such a way that the state of the car at one time is intimately related by the laws of physics to its state at later times.

The question of this thread is: can human-designed, man-made machines achieve intelligence? Sentience? And if so, are they an existential danger to humans? You seem to think that no man-made machine will ever have a soul and so we will never have to grant such a machine our empathy, our sympathy, our respect etc. The hypothesis that thinking beings require divinely bestowed souls grants you a slam-dunk argument. I think it’s very unlikely that humans will ever create sentient machines, but I don’t have a slam-dunk argument that concludes no such thing is possible. Indeed I think our very existence illustrates the principle of a sentient physical system. So I can’t rule out the possibility that we may want some day in the future to grant to some of our machines the same respect that we (ought to) afford others.

trish
07-15-2015, 02:48 AM
I'd probably find him not guilty, if he killed his wife or business partner I'd find him guilty. Let's play hardball. Are you telling me that all the black people that go to Sunday School are ignorant? We are all sadly, sadly ignorant. To address your question more directly, many Sunday School attendees have Faith, which ultimately presupposes a kind of ignorance...doesn't it?

buttslinger
07-15-2015, 03:49 AM
We are all sadly, sadly ignorant. To address your question more directly, many Sunday School attendees have Faith, which ultimately presupposes a kind of ignorance...doesn't it?

Nobody on earth has a Faith that sees beyond personal ignorance?

Stavros
07-15-2015, 12:22 PM
A snowflake consists of nothing but H2O and yet each is unique. Why? Not because it has a soul, but because of the multitudinous variations physics allows in the formation of ice crystals and the multitudinous fluctuations in the ambient environment of these delicate lattices as they grow. So too with living creatures.
There are roughly a hundred billion neurons and on the order of a hundred trillion synaptic connections in a human brain. Even a crude binary description of the state of each connection would take a hundred trillion bits, admitting two to the power of one hundred trillion possible configurations. Clearly we won’t ever be predicting human behavior on the level of neurons. To understand why humans sometimes write poetry we need higher level concepts. Perhaps ideas belonging to fields other than science. But clearly, with so many available biological variations and so much variation and fluctuation in our environment and our lives as we grow, mature and learn, it is natural to expect that when one human is found who writes poetry, not all will necessarily do so. Souls are not needed to understand the uniqueness of human beings. They may be needed to explain other things about humans, but I haven’t been told yet what that might be, or how such explanations work.

-I understand this argument, and I can see how powerful it is. Take, for example, babies who emerge from the womb without the clearly defined genitals humans ought to have, or who do not have all of their limbs or organs, or who are in some way -terrible expression I know but -'not perfect'. In many cases a genetic explanation will focus on the quality of the father's sperm, whether or not the man and woman were closely related, had medical or genetic problems of their own, and so forth. Science can as you suggest explain that genes are not all identical and that this will result in humans who are physically different from their parents.
However-

I’m not sure what “sum” means here. Surely it doesn’t mean the same as it does in arithmetic. Nor does it mean the same thing as “aggregate”, “collection” or “union.” We are not the mere collection of all of our parts, that’s for certain. But then, neither is a car engine. Take your car apart and put all the pieces into a huge box. It will no longer be your car. A car is not a simple sum of all of its parts. The parts of a car are integrated and interfaced in such a way that the state of the car at one time is intimately related by the laws of physics to its state at later times.
-This I think is a weak argument, because two cars being driven off the production line must be exactly the same, so that the Ford I take possession of in say, Birmingham, is the same vehicle as the one John takes possession of in London. Just as I would expect an Apple Mac purchased in one shop to be the same as one purchased in another. Machines that have been designed down to the last rivet and chip surely cannot be subject to 'genetic' modification as humans can be?

The question of this thread is: can human-designed, man-made machines achieve intelligence? Sentience? And if so, are they an existential danger to humans? You seem to think that no man-made machine will ever have a soul and so we will never have to grant such a machine our empathy, our sympathy, our respect etc.
-I don't see why a human should be upset if the kettle leaks, and cradle it as a consequence. Brutal as it sounds, I throw that kettle away and buy a new one. I don't do lullabies for vacuum cleaners. Can machines be designed that switch themselves on and make the tea just before you wake up? Yes of course, but it is still dependent on electricity, and on a human putting water and tea in the machine. Can computers be programmed to switch themselves on and perform functions? Yes, but at what point do they 'think for themselves'? To me this is science fiction, but also illogical. I don't see how a machine can evolve by itself, and do not think that AI can make the leap from human dependency to autonomy.

The hypothesis that thinking beings require divinely bestowed souls grants you a slam-dunk argument. I think it’s very unlikely that humans will ever create sentient machines, but I don’t have a slam-dunk argument that concludes no such thing is possible. Indeed I think our very existence illustrates the principle of a sentient physical system. So I can’t rule out the possibility that we may want some day in the future to grant to some of our machines the same respect that we (ought to) afford others.
-This is where science and religion cannot meet. Science will argue that the child has a genetic disorder where a religious person will say it is karma, or God's Will. Science is satisfactory when it comes to the physical, but is still struggling to explain the autonomy of the person, of an identity that can be shaped in spite of their bodily reality, such as Michael, who on weekends is Michelle. But religion does not really explain it either, since the conclusion that a situation is 'God's Will' is to me a meaningless statement as I do not know how anyone can know God's Will, not least because this God seems to will things for one person that are opposed by another claiming the same authority.

As I suggested, we are not really making progress with this, although I do think it is an interesting thread.

Stavros
07-15-2015, 12:25 PM
Stavros and Trish both love to LORD it over us with their engorged vocabularies, I think my only unfinished business here would be to hear Trish admit to at least the POSSIBILITY of a Universal God that starts to cook where our paths end.

I would not want to Lord it over you, I hope you agree to that. As for God's cooking, I think, on the evidence of the last 5 billion years, one might be inclined to say 'Too much salt, guv'nor.'

trish
07-15-2015, 05:26 PM
Trish said,
I’m not sure what “sum” means here. Surely it doesn’t mean the same as it does in arithmetic. Nor does it mean the same thing as “aggregate”, “collection” or “union.” We are not the mere collection of all of our parts, that’s for certain. But then, neither is a car engine. Take your car apart and put all the pieces into a huge box. It will no longer be your car. A car is not a simple sum of all of its parts. The parts of a car are integrated and interfaced in such a way that the state of the car at one time is intimately related by the laws of physics to its state at later times.
Stavros replied
-This I think is a weak argument, because two cars being driven off the production line must be exactly the same, so that the Ford I take possession of in say, Birmingham, is the same vehicle as the one John takes possession of in London. Just as I would expect an Apple Mac purchased in one shop to be the same as one purchased in another. Machines that have been designed down to the last rivet and chip surely cannot be subject to 'genetic' modification as humans can be?

It would be a weak argument were its aim to prove we cannot make machines that behave predictably (at least within a reasonable degree of tolerance) in predictable situations; but the example was put forward to illustrate that even cars are not just a simple sum of their parts, thus undermining the position that humans must have souls because each human is more than the sum of her parts.

Let me paraphrase the greater-than-argument. It goes like this. You are more than the sum of your parts. The extra bit (the difference between you and the sum of your parts) must be your soul.

The argument is clearly not meant to apply to cars. To avoid such an application, the premise must interpret the word “sum” in a crucially different way and in doing so, it (the premise) assumes the conclusion (you have a soul) rather than proves it. This logical fallacy is called, “begging the question.”


-I don't see why a human should be upset if the kettle leaks, and cradle it as a consequence. Brutal as it sounds, I throw that kettle away and buy a new one. I don't do lullabies for vacuum cleaners. Can machines be designed that switch themselves on and make the tea just before you wake up? Yes of course, but it is still dependent on electricity, and on a human putting water and tea in the machine. Can computers be programmed to switch themselves on and perform functions? Yes, but at what point do they 'think for themselves'? To me this is science fiction, but also illogical.

Of course I’m not arguing that kettles and vacuum cleaners are sentient; only that some machines are (e.g. us) and that there is a possibility that humans might someday craft sentient machines (not that I think they actually will do so, especially anytime soon).


I don't see how a machine can evolve by itself, and do not think that AI can make the leap from human dependency to autonomy.

Nothing “evolves” by itself, but is induced to evolve by way of a myriad of interactions with a complex and chaotic environment. A single human being modifies her outlook on the world and her responses to it not because she is possessed by a divine spirit or is in possession of a soul, but because she interacts with and is influenced by the world around her.


-This is where science and religion cannot meet. Science will argue that the child has a genetic disorder where a religious person will say it is karma, or God's Will. Science is satisfactory when it comes to the physical, but is still struggling to explain the autonomy of the person, of an identity that can be shaped in spite of their bodily reality, such as Michael, who on weekends is Michelle. But religion does not really explain it either, since the conclusion that a situation is 'God's Will' is to me a meaningless statement as I do not know how anyone can know God's Will, not least because this God seems to will things for one person that are opposed by another claiming the same authority.

Let me reiterate that my views in this thread shouldn’t be taken as those of Science. I think we agree that neither science nor “God’s will” satisfactorily explains the phenomenon of sentience or the apparent autonomy of persons. I would extend this judgment of explanatory failure to the soul-hypothesis as well. Given the state of our scientific, theological and philosophical knowledge, shouldn’t we leave open the possibility that some machines might be sentient by virtue of the physical integration of their parts and their complex interaction with an enormous, chaotic and hugely varied world?

Stavros
07-15-2015, 05:40 PM
Of course I’m not arguing that kettles and vacuum cleaners are sentient; only that some machines are (e.g. us) and that there is a possibility that humans might someday craft sentient machines (not that I think they actually will do so, especially anytime soon).


I actually agree with a lot of what you propose, but not the quote above; I just find it too cold (too soulless?) to be described as a machine. It is as cold as that quote attributed to Stalin: the death of an individual is a tragedy, the death of a million is a statistic.
Not a scientific response, I know, but probably the cause of the doubts I have that AI/man-made machines will evolve by themselves. I don't know if it is because I grew up in a world where most people did not have a tv, where computing was rare and robots associated with the B films we saw at Saturday Morning Pictures. But even with the profound changes to AI, I still can't see beyond the buttons and lights. Maybe it's just my age.

trish
07-15-2015, 07:20 PM
Let me be clear. I’m not saying sentience is a matter of computation. The modern theory of computation is a branch of mathematics. Not too long ago it went by the name Recursion Theory. It was founded by Turing, Godel, Church, Kleene and others as a sub-branch of mathematical logic within the field of pure mathematics. Were sentience simply a computational matter, an emergent property of a class of Turing machines (or their equivalent), then the study of consciousness would be reduced to a branch of pure mathematics. There would be no experiments to perform; just definitions to delineate and theorems to prove. For no good reason, my intuition runs counter to this. I do not think mathematics alone can encompass what we call sentience.

I think of sentience as a natural phenomenon. To me the world is not cold. It is complicated; complex to the point of being incalculable. It is a web of difficult and incomprehensible things: matter, energy, spacetime, fields and all the things embedded within and constituted by these things; all interacting in quantum bizarre and geometrically convoluted ways. Mathematics is cold and abstract. The world of things is hot, vast and filled with possibility.

Perhaps our difference just lies in what we understand a machine to be. For me, just about anything in the natural world that transitions from one state to the next according to natural laws as it interacts with the world is a machine. I’m inclined to the intuition that everything in the world belongs to the world (i.e. is natural). You, perhaps, are inclined to the intuition that some things (souls in particular) do not (i.e. some things are supernatural).

Perhaps we simply disagree on what “natural” means. I’m inclined to think that should it be proven that humans have souls (I’m not holding my breath), then through that proof we will have some footing toward working out how souls interact with their hosts, with each other and the rest of the world. We will begin to discover how they fit and function in the natural world; i.e. souls themselves would be understood, not as supernatural, but natural entities that belong to the world and arise from the natural world. Your inclination may be to draw a line and divide existence between the natural and the unnatural. Mine is to erase the line and let nature encompass all.

Even allowing for these different perspectives, our disagreement on the existence of souls currently remains a substantial one.

martin48
07-16-2015, 12:42 PM
So we do not understand fully how our brains work, we do not understand fully the influence of nature and nurture, but we can observe the amazing similarities of identical twins who are raised apart. Nature – our genetic base – probably has more influence than we think.

So individuals are different in behaviour and beliefs; that is understandable. We do exhibit, or we believe we do, free will.

If there are gaps in our observable knowledge then why jump to filling in these gaps with something called the soul? What is our evidence? It is a seemingly useful handle to cover our ignorance. But that is exactly what it is: something that keeps us ignorant.

fred41
07-17-2015, 03:04 AM
A person is given a genetic blueprint and so a starting course is already written out...but prenatal and postnatal environmental effects further shape a person whose every decision during his/her early life can alter his/her personality and who they 'are'.
A person whose identical twin has developed schizophrenia has, if I read it correctly, a 48% chance of also having the disease based on genetics. Compared to the average person this is quite high, but note that it also means the person has a better than even chance of not getting the sickness (or activating it). Doctors believe prenatal conditions, early childhood diseases, high levels of stress, substance abuse, avoiding social interactions...in other words - environmental factors - all play a strong role in whether or not a person with a predisposition to the illness will actually develop it. My point is that how a person develops emotionally can be tweaked in so many nuanced ways, by so many factors, that the explanation of a 'soul' is almost entirely unnecessary. How the mind and body develop is complex because living beings are complex. (BTW do identical twins share the same soul that was split?)
There is no proof that an actual soul exists. It's just another one of those things that no one ever saw, felt, heard or tasted...but someone first came up with the idea, somewhere in time, and now people continue to believe in it based on faith. Sure, science can't disprove the existence of a soul...but it can't disprove the existence of pixies either...they're just real good at hiding.
There is no war between 'science' and religion. Science is a tool, a continuous gathering of information. Religion is faith in the supernatural...it's spiritual.
But people will always believe what they want to believe...they prefer romance to facts. It's why Uri Geller still cons people even though James Randi has shown him to be a fraud.
We don't understand everything in life and we may never completely understand life itself. To that end I guess you can use the word "soul" to mean - the thing that gives a person the spark of life, much as Victor Frankenstein's monster used electricity...the energy that activated him.
But that's not the same as defining the word soul as a spiritual mass of feeling and personality...that is, in fact, the 'real' person, with or without flesh.
The earth is an incredible place when you think about it...even in that 'cold' scientific way. Everything plays a part in everything else....and it's incredible how cells, organisms, beings adapt and change as is necessary for life...souls or not. God or not.
I no longer believe in a monotheistic God.
but I still feel wonder when I see a deer or a family of foxes...or a shooting star...or hold someone's hand. I don't need the added romance or poetry of faith in a higher being. What's in front of me in this life is good enough for me.

fred41
07-17-2015, 03:22 AM
Went a bit off track there. To bring it back in context of the thread, I would say I don't have a problem with the power that an artificial sentient being uses to stay 'alive' being labeled a 'soul'. But a consciousness would be something different altogether.

Stavros
07-17-2015, 02:56 PM
By way of responding to the various posts above, I should apologise for not always being coherent on this subject. It is something I have occasionally thought about, but probably because I shall not live long enough to see the quantum leaps in computing/AI that we are promised, I tend to think about AI as a social issue, such as the mechanisation removing humans from the production of commodities, and the challenge this poses for the state, so that it is more political than scientific.

I tend therefore to think of this by using cars and computers as examples: if I buy two Apple laptops I expect both of them to function in the same way, as man-made machines designed to be identical in every way. Trish makes the valid point that while humans can be viewed as machines with the same working parts, in reality, just as two snowflakes made by the same process look different, the multiplicity of neurons and other components to a human mean that like snowflakes we will be simultaneously the same but different, just as apparently, twins can exhibit remarkable duplications of thought feeling and behaviour, even if in some other aspects they retain a degree of individuality. One notes, as an aside, that in some ancient cultures, twins were considered a curse or a calamity to the extent that one of the two might be killed at birth. Rene Girard in Violence and the Sacred (1972, page 56) uses this in his discussion of mimesis as both a building block of human societies but also one of its potential weaknesses, as it leads to envy, covetousness and its expression in violence.

The argument in science that the 'soul' does not exist because observer-dependent science has not found it has been challenged by some scientists. Thus, in an article in Psychology Today Robert Lanza points out that 'weirdness' is as much a part of quantum theory as rationality, thus:
While neuroscience has made tremendous progress illuminating the functioning of the brain, why we have a subjective experience remains mysterious. The problem of the soul lies exactly here, in understanding the nature of the self, the "I" in existence that feels and lives life.

Scientists do rely on probability rather than evidence as an explanation for phenomena that they know are happening but which cannot be seen, yet this is not considered irrational or 'unscientific'. Whether or not Lanza is stretching the boundaries to include 'the soul' in science you can judge for yourself here:
https://www.psychologytoday.com/blog/biocentrism/201112/does-the-soul-exist-evidence-says-yes

The social level at which this becomes interesting to me relates to the issue of social change and what Marx called the 'means of production' and the 'social relations of production'. Marx relied to a great extent on a flawed processional view of history which begins with primitive communism, and moves through revolutionary phases to feudalism, to capitalism (mercantile capitalism followed by industrial capitalism and for some later thinkers monopoly capitalism etc) to socialism and ultimately to communism, thereby reproducing in material terms Hegel's concept of consciousness as something that begins as nothing and through multiple stages of challenge and change matures and grows until it expands to a state of absolute consciousness that has been described as 'Hegel's journey toward the sunlight', but which might in other terms be Nirvana.

Crucially, what Marx attempted to do in volumes 1 and 2 of Capital was to show how human beings who had at one time made their own tools, farmed the land for their food, made their own clothes from animals and crops, etc, find themselves in capitalist societies where their tools and their expertise has been transformed into a significantly more productive machine, to which they have become merely an appendage, required to push buttons or perform the same menial task a million times in a 16 hour day. Marx believed this mode of production -in his case, factory production- and the social relations in which it took place, created a form of reification in which the relationship of people takes on the appearance of a relationship between things, or to put it another way, human communities defined by human identity are replaced by networks of monetized linkages. Marx believed Hegel had consciousness upside-down, and that rather than seeing everything as a product of the mind, Marx saw the material world as the source of consciousness and thus argued that the dehumanization of the worker in a factory was possible because the worker's consciousness of himself as a free person had been crushed, he became a wage-slave -but that by bringing a collective of workers together in one place, a 'working class' or 'collective consciousness' became possible which, if becoming political, could revolutionize both capitalist production and social relations, and push human society out of capitalism and into a wholly new experience of life.

Curiously, Marx also seems to see humans as machines, in the sense that the working class is forced to do the same mechanical things all day and every day, and also sees collective action as the source of hope for revolutionary change, where history suggests that the kind of revolution Marx advocated has ended up creating societies where individual identity is considered such a threat to the organization and survival of the state that such people are physically removed either through murder, or by sending them to the Gulag.

But does not religion also impose collective identity on individuals, and go further to argue that to be a Christian, a Jew, a Muslim etc, does not just mean believing the same things, but behaving in the same way? Is this a way in which human beings are again presumed to be 'merely machines' by movements which claim to have identified the author of the body and the soul? Religions may not describe their believers as machines, but seem to think they should behave in an identical or regimented manner -to attend Synagogue or Mass, or Friday prayers, to fast at a particular time and so on. To believe man and woman are the 'natural' order of things and that any other variation is a violation of 'God's law'.

So that in practice, science and religion actually converge on a wide range of beliefs and practices. And for the most part, it seems to be either a denial of diversity and individuality -communism, religion- or a denial that the self even exists other than as the behaviour of molecules and neurons in the brain transformed into language.

From this point of view, the soul need not be a separate thing from the body, but the means whereby an individual exercises the reflexivity of thought and feeling that enables that individual to make decisions -practical, moral etc- some of which are determined by the body, some by society, but some by an unseen probability which enables us to identify one person as different from another, just as it enables two men, William Shakespeare and Christopher Marlowe, to write at the same time indeed, on the same day, but with different degrees of literary and theatrical skill.

Taken into AI, the only way a machine could become 'sentient' would be if a human were to interfere with a design whose intention is to limit a machine's functions and give it the power to expand into an autonomous AI 'creature', just as in literature and film the robots who rebel, or 'resurrected dinosaurs' who become carnivorous, have been made that way by mad scientists or through greed. One notes, as an aside, that robots don't tend to be interested in world peace or love and compassion, but that must be a reflection of their creators' prejudice.

But in a world which wants to re-clone the woolly Mammoth, re-introduce wolves into Britain (no thanks!), maybe the ultimate question is not, can we trust machines, but can we trust humans? After all, we have invented nuclear weapons, and continue to develop them, as if we were sleepwalking into our own destruction.

Finally, given that for some the soul is the proof of eternal life, is it not the case that a computer could in theory function for an eternity -at least as long as the sun shines?

buttslinger
07-17-2015, 05:07 PM
ATTENTION SOULLESS BASTARDS!!!

Greetings and Salutations to you all!

Religion is the opiate of the people-K Marx

INDEED


If only one person who ever walked the face of the earth was a witness to GOD, then God exists. Period.
Not the God who left you high and dry when you really NEEDED that bicycle in third grade, not the God that people believe in but don't understand, THE GOD.
I am not saying that everyone should run off to a Zen Monastery, I'm saying if you spent eight years of your life training to be a Marathon runner, and you achieved your goal by winning the Olympics, then all that stupid training and sacrifice might have been worth the effort and time spent. To you. The fact that the losers in the stands around you couldn't fully understand it wouldn't mean much, but you probably would wish that they could understand what you were feeling.
Even people that have seen God don't understand it; it surpasses understanding. Just because you can't put it in a test tube...blah blah blah.

I am ancient enough to remember reading that egg farmers had some site on the internet that would instantly bring them every mention of the word "EGG" in the daily papers. (early google)
And reading that airplane designers didn't need wind tunnels because they could do those tests on a computer.
For me, that's pretty good, the fact that humans have their greasy fingerprints all over keyboards and internets is good. Computers enhance human intelligence.
Of course they also made it possible for some pricks to steal my Mom's IRS return.
Just like God is our Father, Computers are looking like they will be the father of our puny brains, the destinations our logic would go to if we had the ability to go that far and fast. So that's pretty exciting. I can't imagine any computer that wouldn't have guys plugged into it, and so far, yeah, Self Aware Computers exist only in the imaginations of science fiction writers. If some computer pushes the button that launches all the ICBMs in the American arsenal, I would still call that human error.

Even if by chance some electrical aura becomes self aware, I doubt it would destroy the human race or even care about the human race. We tend to be very foolish about our own importance.

trish
07-17-2015, 09:50 PM
If only one person who ever walked the face of the earth was a witness to GOD, then God exists. Period. Agreed. This is a tautology. If I witness a tree, then a tree existed at the time and place I witnessed it. But I actually have to witness a tree, and not a cardboard cutout of a tree. Experiencing an illusion of a tree might lead me to claim that I witnessed a tree when I did not. That’s one problem. Another is, the tree may no longer exist. It might have died, been burned in a fire or bulldozed. Still another problem is: if I was the only witness, then billions of other people will only have heard or misheard the story of my witnessing a tree second hand or third or fourth or fifth etc. Maybe I witnessed a long island tea and it got misreported.

We meet various problems of this sort in the Turing test. One may only be witnessing artificial intelligence and believe one is witnessing real intelligence, real sentience, real thought processes and real consciousness.

The original concern of this thread was to draw attention to the dangers of developing machines of superintelligence. Stephen Hawking raises the specter of sentient machines that develop and work toward their own goals while crushing humanity beneath their treads.

I have to say this sounds more like science fiction to me than reality. This is not to say that, as we lay off more and more workers, our machines won’t soon prove hazardous to our economic health; or that, as some of us become more and more interconnected, those left out won’t be left behind...educationally, culturally and socially. Here I think I’m in agreement with Stavros.

Our only disagreements are 1) whether or not we ourselves are machines which are conscious by virtue of the physical integration of their parts and their complex physical interactions with an enormous, chaotic and hugely varied world; and 2) whether or not machines crafted by humans may someday also be sentient in the same way.

Stavros takes the position that we are more than the sum of our parts and the extra bit that we are is the soul. I am satisfied that post #73 demonstrates that argument is fallacious. However, just because the argument is fallacious, it doesn’t follow that its conclusion is false. But I’m left without any good reasons for adopting the soul-hypothesis.

Buttslinger brings up the question of the existence of a creator god. I’m not sure if such a creature has much to do with the issue being discussed other than by analogy. I presume that if there’s an almighty, all knowing, all powerful, everywhere present divinity who uploads and installs souls into human babies as they are born, or conceived or whenever, then that deity can just as easily upload and install a soul into just about anything he wants: a tree, a frog, a fortress or a droid that speaks in beeps and whistles. So the soul-hypothesis doesn’t really prevent machines from being sentient. One needs an additional hypothesis: no person (deity, human or of some other race) will ever give a machine a soul and no machine will ever acquire one by any means. Of course such a hypothesis just begs the question we’ve been trying for several pages to answer.

I see my main difference with some posters here as a matter of perspective. It is not, however the perspective of Science as opposed to Religion. Few of the views I presented in this thread are scientific, they are rather naturalist. The opposing perspective is that human autonomy and sentience have an unnatural origin. One proposal is that our essential selves are not of this world; they are souls. Souls are something unnatural. They transcend nature. They do not consist of any of the things that constitute the reality of nature. They are not made from molecules, or elementary particles, or strings, or any other kind of natural matter or energy. They are bestowed upon human bodies by creatures from Divine and Demonic worlds who sometimes transgress the borders of the natural world and stay its laws to effect their own will.

Stavros
07-18-2015, 12:09 PM
We meet various problems of this sort in the Turing test. One may only be witnessing artificial intelligence and believe one is witnessing real intelligence, real sentience, real thought processes and real consciousness.
The original concern of this thread was to draw attention to the dangers of developing machines of superintelligence. Stephen Hawking raises the specter of sentient machines that develop and work toward their own goals while crushing humanity beneath their treads.

-From today's Telegraph (in the link there is also a video):
Robot passes self-awareness test

A simple experiment has shown that robots have greater self awareness and deductive powers than previously thought
Robots might be even cleverer than we realised.

A simple experiment carried out in New York has demonstrated that robots not only have greater powers of deduction than previously acknowledged but are also aware of their own limitations.
Scientists at Rensselaer Polytechnic Institute in New York built three robots who were all put through what is known as the “three wise men” test.
It sounds like a child’s fable.
The robots played the role of the three wisest men in the kingdom who were summoned by the monarch to his court.
In the tale, the king placed a blue or white hat on the wise men’s heads - without telling them which colour they have been given.
The first sage to deduce the colour of his hat was appointed as the king’s adviser.
In the New York experiment, the test was tweaked by Selmer Bringsjord.
Two of the robots were programmed to be unable to talk, the third was not.
All three were then asked to say who had the power of speech.
The robots all tried to say “I don’t know”.
But the one who could hear its own voice realised it had not been silenced and added “Sorry, I know now.”
The “winning” robot jumped two important logical hurdles.
Firstly, it understood the question. Secondly and crucially, it heard its own voice and finally used this information to give the correct answer.
The significance of the experiment is that it shows that robots can be developed to have some human qualities such as self awareness and deduction.
Mr Bringsjord will present his findings at the RO-MAN conference in Japan which runs from August 31 to September 4.
In Japan a new hotel planned for Nagasaki is planning to use robots to greet guests (http://www.telegraph.co.uk/travel/destinations/asia/japan/11387330/Robots-to-serve-guests-in-Japanese-hotel.html), make them coffee and even carry luggage to their rooms.
http://www.telegraph.co.uk/news/worldnews/northamerica/usa/11748084/Robot-passes-self-awareness-test.html

I have to say this sounds more like science fiction to me than reality. This is not to say that, as we lay off more and more workers, our machines won’t soon prove hazardous to our economic health; or that, as some of us become more and more interconnected, those left out won’t be left behind...educationally, culturally and socially. Here I think I’m in agreement with Stavros.

Our only disagreements are 1) whether or not we ourselves are machines which are conscious by virtue of the physical integration of their parts and their complex physical interactions with an enormous, chaotic and hugely varied world; and 2) whether or not machines crafted by humans may someday also be sentient in the same way.

Stavros takes the position that we are more than the sum of our parts and the extra bit that we are is the soul. I am satisfied that post #73 demonstrates that argument is fallacious. However, just because the argument is fallacious, it doesn’t follow that its conclusion is false. But I’m left without any good reasons for adopting the soul-hypothesis.

-I am not sure that I suggested the soul is 'an extra bit' of humans, rather that it is something we have which science has not really been able to explain, or which it says does not exist; just as it has a problem defining what consciousness is. The sum of being human is clearly not just the body, and to say that we ourselves are machines that are conscious by virtue of the physical integration of their parts and their complex physical interactions with an enormous, chaotic and hugely varied world is, to me, a weak argument that merely accepts consciousness as what it is. Science doesn't look at the heart in that way.
It has been argued that ants are completely unaware that humans exist, yet they must have consciousness that enables them to interact with each other to provide food and shelter. How can we know that our own conscious awareness of ourselves does not exclude other living beings? Science tells us this is not the case, yet still cannot explain consciousness without reducing it to the equivalent of electricity or whatever it is that powers this computer.

I see my main difference with some posters here as a matter of perspective. It is not, however the perspective of Science as opposed to Religion. Few of the views I presented in this thread are scientific, they are rather naturalist. The opposing perspective is that human autonomy and sentience have an unnatural origin. One proposal is that our essential selves are not of this world; they are souls. Souls are something unnatural. They transcend nature. They do not consist of any of the things that constitute the reality of nature. They are not made from molecules, or elementary particles, or strings, or any other kind of natural matter or energy. They are bestowed upon human bodies by creatures from Divine and Demonic worlds who sometimes transgress the borders of the natural world and stay its laws to effect their own will.
-I don't believe I have stated that I believe humans are God's creation and that the soul is the proof of this claim, which leaves us with the problem of whether or not we have minds, souls, consciousness and what these might be other than pulses generated by sunshine and water.
Is the irony of all this not in the image of a world of one set of machines -humans, being taken over by another, AI? At what point does anyone make a qualitative assessment of the two? And on a measurable scale of values, is one species superior to the other? As the creators of computers have we become Gods destined to be overthrown by our own creation?

trish
07-18-2015, 07:40 PM
Thanks, Stavros, for the link to the Telegraph article. I have to read more carefully, but offhand I must say I’m not convinced the experiment requires anything more of the three programs than routine logic-solving capabilities.
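To see what that routine logic might look like, here is a minimal sketch in Python of the deduction described in the article. This is not the RPI team's actual code: the Robot class, the muted flag and the way hearing is modelled are assumptions made purely for illustration.

# A minimal sketch of the "three wise men" deduction described above.
# NOT the RPI team's code: the Robot class, the 'muted' flag and the
# hearing model are illustrative assumptions only.

class Robot:
    def __init__(self, name, muted):
        self.name = name
        self.muted = muted  # two robots are silenced, one is not

    def try_to_speak(self, phrase):
        # A muted robot produces no sound, so it cannot hear itself either.
        return None if self.muted else phrase

    def answer_riddle(self):
        # Step 1: every robot attempts the same honest answer.
        heard = self.try_to_speak("I don't know")
        # Step 2: only the robot that hears its own utterance gains the new
        # fact it needs, and revises its answer accordingly.
        if heard is not None:
            return "Sorry, I know now - I am the one who can speak."
        return None

robots = [Robot("R1", True), Robot("R2", True), Robot("R3", False)]
for r in robots:
    answer = r.answer_riddle()
    if answer:
        print(r.name + ": " + answer)  # only R3 reaches the correct conclusion

The only extra ingredient over ordinary rule-following is the self-monitoring step: the correct answer becomes available solely to the robot that can observe its own output, which is what the researchers present as a (very limited) form of self-awareness.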


-I am not sure that I suggested the soul is 'an extra bit' of humans, rather that is something we have which science has not really been able to explain or because it says it does not exist; just as it has a problem defining what consciousness is, because the sum of being human is clearly not just the body...

I apologize then for misrepresenting your position. The claim "We are more than the sum of our parts" requires us to interpret "sum" and "more". As I’ve pointed out, a car engine is more than the sum of its parts if we interpret "sum" as a mere bag of disassembled parts lying in a bin and "more" as the difference between what the assembled engine can do as opposed to the bin of parts. In this example the "extra bit" is the natural interaction of the configured parts in accordance with the laws of thermodynamics. The whole, the parts and the "extra bit" are all encompassed by the natural world. If this is how you employ the words "sum" and "more" we have no disagreement, just a semantic misunderstanding. Somehow, I suspect this is not the case.


It has been argued that ants are completely unaware that humans exist, yet they must have consciousness that enables them to interact with each other to provide food and shelter. How can we know that our own conscious awareness of ourselves does not exclude other living beings? Science tells us this is not the case, yet still cannot explain consciousness without reducing it to the equivalent of electricity or whatever it is that powers this computer.

As far as I know science hasn’t explained or reduced consciousness to anyone’s satisfaction. Some philosophers (like Dennett and the Churchlands) make extravagant claims for science in this regard, but none of what they do can be called science. I do not have much hope that we will explain consciousness anytime soon. I only claim (from my preferred perspective) that it is a natural phenomenon. There are a lot of phenomena that remain unexplained: the source of dark energy (if it really exists), why the Sun is 99.7% of the Solar System by mass but only 3.5% by angular momentum, the escape of information from behind the event horizon of black holes, the origin of life on Earth etc. My inclination is to assume these are all natural phenomena. I may be wrong in some instances, but whether they are or not is clearly independent of whether or not we humans have an explanation for these phenomena.


-I don't believe I have stated that I believe humans are God's creation and that the soul is the proof of this claim, which leaves us with the problem of whether or not we have minds, souls, consciousness and what these might be other than pulses generated by sunshine and water.


Again I apologize for misstating your view.


Is the irony of all this not in the image of a world of one set of machines -humans, being taken over by another, AI? At one point does anyone make a qualitative assessment of the two? And on a measurable scale of values, is one species superior to the other? As the creators of computers have we become Gods destined to be overthrown by our own creation?

Not unlike a king who is usurped by his son, or the businessman who is so consumed by the company he created he forgets to live. Kids! Why can’t they be like we were, perfect in every way?

sukumvit boy
07-19-2015, 02:21 AM
A refreshing step back and look at the future of robotics and AI, featuring David Pogue of Google, Bill Gates and Elon Musk....
http://www.youtube.com/watch?v=GweFbcPlJXg

http://www.youtube.com/watch?v=vHzJ_AJ34uQ

http://www.youtube.com/watch?v=eBFK2Wscp4E

Stavros
07-19-2015, 03:01 PM
In response to the above posts, might I suggest that there is an obsession with humanoid robots but that AI might best be taken into areas which do not require the construction of a pretend human? We already have the internet, and computing married to geography has produced Geographical Information Systems which we hope will better predict earthquakes and tsunami (separately or together), climate patterns, maybe even volcanic eruptions, and this seems to me to be more useful than spending millions to get Charlie to make me a cup of coffee and me worrying that he might have poisoned it.

Transport is one area where AI can usefully replace cars, for example. I don't see why commuters into the major cities, be it in the USA or anywhere else, should drive when an integrated and wholly automated railway can do it. Moreover, this service could easily run 24 hours a day 365 days a year because 'robots' or AI don't need sleep and don't need rights at work either. Although this is another example of technology replacing humans, I think we under-estimate the power of AI to do things that are positive, as Sukumvitboy's youtube links suggest. After all, real life is different from Hollywood.

buttslinger
07-19-2015, 09:28 PM
It could be that Bill Gates already has an android that looks unearthly beautiful, fucks like a wildcat and cleans the entire house, but he'll be remembered for turning the world on to "Windows".
It is possible THE ONE PERCENT already control the entire world, I mean, how much do they need to own before people get suspicious that the game is rigged, 65%? 80%??????
Maybe computers have reasons that 50% is the number where the public at large is complacent enough not to get pitchforks and torches and attack their summer homes....
First and foremost, there is a computer called HUMAN NATURE that always seems to win out, so far at least. Sometimes human nature puts the individual before the whole of humanity.
Science Fiction writers of the 1800s would have been laughed at if they predicted that we would have the power to turn the world into a radioactive dust storm for the next ten thousand years. Maybe computers should kill everyone at age thirty. Death Panels. NO FEAR!!!

buttslinger
07-20-2015, 07:07 PM
My niece had a job where she was kinda on the ground floor of Obamacare, taking advantage of the obvious benefits of a nationwide base of doctors and patients, trying to eliminate the clusterfuck of American Health Care. The old doctors with illegible handwriting hated it. The younger doogie howsers who grew up on a computer got it.

The obvious advantage to a nationwide SUPERINTELLIGENT computer is that it could take your eyeprint and compute which job you're most likely to excel at, which Hotel in Vegas would suit you best, which car insurance is best.........all the mysteries of the universe enlightened. Hopefully a supercomputer would eliminate all the corruption and waste from A to Z, and robots would give everyone a 24 hour work week. Not nationwide unemployment. Rather than a Computer that dominated everyone's life, everyone would have a personal Secretary who is plugged into everything.

If you could connect your personal computer into the personal computer of a car and immediately compute fair market value, this might actually spur car makers and owners to get with the American Program of delivering what the people want. Plug your phone into the local supermarket and pick up your week's worth of healthy food delivered at the drive-thru. Chart the map that puts every American onto the road to the American dream. Every personal computer would have 99 Supreme Court Judges built in. To determine what's fair. Then the Japanese could fine-tune it to be more productive.

Integrating 300 million plus Americans into a computerized system would mean there would be lots of bugs to get out, but ultimately it would result in less shit work than is done now. You might have to turn off the entire system on Sundays so people actually look at each other again. I doubt it would even get considered until drive time to work is four hours and the price of gas goes up to $20/gallon. Squeaky wheel gets the grease.

People might actually get to a place where they don't have to embrace all their problems. Ready?

trish
08-25-2015, 04:07 PM
http://nyti.ms/1TN7ipe

sukumvit boy
09-30-2015, 01:55 AM
Nice "Times" article Trish...strange to think that war could be made less horrific by AI but I have seen the terrible consequences of land mines and unexploded cluster bombs in places like Cambodia where it seems like every 10th person is missing some part of an extremity. Those were truly 'stupid' weapons as well as the primitive mindset that deployed them.

sukumvit boy
09-30-2015, 02:02 AM
Love these little classic sci-fi pics of yours buttslinger, like this Fritz Lang Metropolis,
and I, Robot a while back.

buttslinger
09-30-2015, 06:58 AM
Love these little classic sci-fi pics

Sci-fi is commonplace now, kind of takes the imagination out of it.

nitron
10-07-2015, 10:17 PM
No, but when are we ever ready for anything that's happened to us in the last few millennia? Personally I feel that massive unemployment will hit our civilization before AI has a chance to be fully realized, and that might slow down the rush for it. As far as whether it will be good or bad...I tend to lean on the side of caution; this will be a species with greater powers than humans, if it's allowed full sentience. It won't be controlled, like humans are, by instincts and unconscious processes. (If it is sentient, it might be able to go around any behavioral programming we embed it with.) It will be a GOD....Are we ready?

hippifried
10-08-2015, 06:42 AM
I'll start thinking about getting ready for this bullshit when Siri starts to "get it".

trish
10-11-2015, 01:25 AM
Stephen Hawking says machines aren't the problem, inequality is....
http://www.marketwatch.com/story/5-questions-with-stephen-hawking-technology-is-driving-ever-increasing-inequality-2015-10-08

hippifried
10-11-2015, 07:06 AM
Stephen Hawking says machines aren't the problem, inequality is....
http://www.marketwatch.com/story/5-questions-with-stephen-hawking-technology-is-driving-ever-increasing-inequality-2015-10-08

He makes some very good points. But inequality has nothing to do with machines. There's always been some asshole or group of assholes who think they should be in charge, & everybody else should be subservient. Today, technology notwithstanding, is no different.

I agree with his take (or lack thereof) on women.

broncofan
10-11-2015, 11:13 AM
Stephen Hawking says machines aren't the problem, inequality is....
http://www.marketwatch.com/story/5-questions-with-stephen-hawking-technology-is-driving-ever-increasing-inequality-2015-10-08
Question: What is your favorite song ever written?
Hawking: “Have I Told You Lately” by Rod Stewart
:what

buttslinger
10-11-2015, 11:19 AM
....... There's always been some asshole or group of assholes who think they should be in charge, & everybody else should be subservient. Today, technology notwithstanding, is no different.

I agree with his take (or lack thereof) on women.

I was just reading about Bill Gates and his.... "alleged" .....shadowy deal with THE MAN to make Windows 10 an NSA dream. I never have trusted that guy.

Oddly there is only one thing I agree with black men on: All women are either bitches or hoes.........
(not you, Mom)
(not you, female analyst reading this at some government spy agency)

trish
10-11-2015, 04:07 PM
Love how with just one sentence you manage to slur both blacks and women. Bravo.

buttslinger
10-11-2015, 05:29 PM
Love how with just one sentence you manage to slur both blacks and women. Bravo.

https://www.youtube.com/watch?v=AxDo7P0wUn0

Caleigh
10-11-2015, 06:21 PM
how did this thread devolve into misogynist advertising?

Caleigh
10-11-2015, 06:25 PM
https://www.youtube.com/watch?v=7Pq-S557XQU

Fyusian
10-11-2015, 06:42 PM
The Machine Stops quite accurately predicts our downfall. Written in 1909, it predicted instant messaging and internet technologies, and how humanity would come to rely on them so much that it forgets basic social interaction. Now obviously we aren't at that point and probably will never end up like that, but we're not exactly in a good place when we see social network addiction and how people apparently now need therapy just to cut down on their use of social networking. I've heard of people having panic attacks when their phone battery runs out because they can't be without it for a couple of minutes. It's madness.


How many people rely on being given answers from Google rather than understanding the answers they're reading? Einstein was right, technology is surpassing human interaction and breeding a generation of idiots in an era where there is little excuse for people not to be educated anymore.

So how can we be ready for super advanced artificial intelligence when so many people can't even handle social networking or the internet?

But one thing at a time I say. We'll have augmented humans before anything else and perhaps that would be better. Create the best of both worlds with machine and man. That's the real future and let's be honest, if we did end up creating robots with more advanced intelligence than ours, their logical next step would be to replace us. So create augmented humans first to ensure robots can never rebel against their masters! lol

Of course cybernetic humans themselves raise many topics to discuss.

hippifried
10-12-2015, 04:05 AM
So far, I'm not impressed with any of the AI I've seen. A month or so ago, I wanted to buy a new case for my 6+. I called the local # to the Apple store at the Biltmore in Phoenix to see if the one I wanted, in the color I wanted was in stock before I made the trip (3 miles). What I got was a CG voice that said it could understand whatever questions I might have. After listening to several versions of "I don't know what you want", I asked to be transferred to a human sales rep. So I get transferred to someone at a store in Denver who politely tells me that they have what I want. When I explained that I'm in Phx, he said "oh" & gave me back to the bot. Now I'm getting irritated, so all I say is "I want to speak to a human being". It took nearly 10 minutes of repeating the words "human being" over & over before I got transferred to some guy in CA. For all I know, it could've been some corporate mucky muck in Cupertino. I could almost feel his eyes rolling as I told my story again. He then transferred me directly to a real phone at the Biltmore store in Phx, where a young lady was able to take care of me in less than 2 minutes. That fiasco was more than a half hour wasted. If not for my pacemaker, I might have blown a gasket.

buttslinger
10-12-2015, 05:57 AM
"I want to speak to a human being".

SIDEBAR ALERT!!!!

Hey Hippifried, did you just move to Phoenix?
Is that dry desert air everything they say it is?

hippifried
10-12-2015, 08:41 AM
SIDEBAR ALERT!!!!

Hey Hippifried, did you just move to Phoenix?
Is that dry desert air everything they say it is?
Except for an 8 year stint (till 3 years ago) in Imperial Valley CA, I've been in Phx since 1967. In AZ since 1960. It's been great the last week or so. Highs in the mid to high 90s & mid 70s at night, but it's supposed to get back over 100 for the rest of the week.
So what do "they" say about dry desert air? Probably bullshit. Like anywhere, there's stuff that can bite you. Some folks find it beneficial overall. Depends what you're allergic to. Phx is a big city. The valley used to be irrigated farms & orchards. Now it's concrete, asphalt, & housing tracts, with a population of over 5 million. You have to drive a ways to find the desert. In defense of my home though: We have great Mexican food, almost no mosquitoes, & you can dress in shorts & t-shirts 10 months out of the year. But then again, there's an 8% sales tax. ...But it's a dry heat... Yeah, uh huh...

buttslinger
10-12-2015, 02:11 PM
.....there's an 8% sales tax. ...But it's a dry heat....

I live in No. Virginia, and I just read an article that puts you and me in the most desirable places not only to live, but to retire. That makes it expensive, but you get what you pay for. If I ditch my big house and yard, in my personal situation, it almost makes sense to liquidate everything I have and own a suitcase full of cash. And a plane ticket to Shangri-La. My real estate tax is no joke either. But having all the creature comforts they have in congested overtaxed urban areas......I'm hooked. The humidity in the summer here makes two months of the year unbearable while you watch your lawn turn to a crunchy brown. Your area has the best humidity levels in the entire USA. You probably take it for granted.


ANYWAY>>>>>back to the trough, folks, thanks Hippifried, thanks, civilians.

Instead of AI, I'd like six buttslinger clones, made from my DNA, and my personal property. They could get jobs and cook and clean for me 24/7. What would I do with a computer that's self aware? What would it need me for?

hippifried
10-12-2015, 09:35 PM
Instead of AI, I'd like six buttslinger clones, made from my DNA, and my personal property. They could get jobs and cook and clean for me 24/7. What would I do with a computer that's self aware? What would it need me for?

Hope those clones can handle your geriatric home care. Wouldn't they have to be grown & trained first? I wonder if an aversion to cleaning up drool & changing diapers is genetic. Maybe a better idea would be to start now, cloning replacement parts. Or as needed if they can be grown quickly in place.

As for a self aware computer:
It'd probably just give itself a complex or a binary nervous breakdown. Then get so confused & depressed that it turns itself off.

buttslinger
10-13-2015, 01:02 AM
Wouldn't they have to be grown & trained first? I wonder if an aversion to cleaning up drool & changing diapers is genetic.

It's a shame my clones would be more trouble than they're worth. Even with the sweat shop money.

Considering that twenty years ago I was in awe of chess programs on my 486 computer, it's only logical that even if computers never become artificially intelligent, they really should be able to rule Wall St or coach football teams. While a generation of humans is twenty years and then starts over from scratch, each generation will have a smarter computer to play with. I wonder if a hundred years from now they'll look back at these days as Idyllic, moronic, or both.

Caleigh
10-13-2015, 08:28 PM
http://venturebeat.com/2015/01/02/robots-can-now-learn-to-cook-just-like-you-do-by-watching-youtube-videos/

Stavros
10-21-2015, 06:44 PM
I have never seen the film Back to the Future so I can't really comment on the various articles and programmes on tv and the radio which ask why some of the predictions in the film came true when some did not, but there is an interesting article in today's Telegraph about a man called 'John Titor' who claimed to have come back from the year 2036 and who posted this message on an internet forum:

Greetings. I am a time traveller from the year 2036. I am on my way home after getting an IBM 5100 computer system from the year 1975. “My ‘time’ machine is a stationary mass, temporal displacement unit manufactured by General Electric. The unit is powered by two top-spin dual-positive singularities that produce a standard off-set Tipler sinusoid.
“I will be happy to post pictures of the unit.”

-There is more in the link, and while he seems to be a fantasy (one of the commenters on the article claims it is taken from a science fiction book called Alas, Babylon (1959)), his precise need for an IBM 5100 is, well, weird...

http://www.telegraph.co.uk/news/science/11945420/Who-was-John-Titor-the-time-traveller-who-came-from-2036-to-warn-us-of-a-nuclear-war.html

Caleigh
10-21-2015, 07:51 PM
Oh come on.... we can't even get electron spin-pair communication working yet, though we might have it in 5 or 10 years, and time travel? A hoax, no doubt. All someone needs to do is post some text to a site and suddenly we are believing in time travel?

My video link posted above, about the cooking robot that learns from watching videos... that is now, not some hoax.

trish
10-21-2015, 08:01 PM
The "Tipler" in the "standard off-set Tipler sinusoid" is Frank Tipler. He was a promising physicist/cosmologist, who wrote a number of interesting papers (some of them on spacetimes that support time-travel), until he went off the deep end. Now he's the head guru of the Omega Point Theology that our mutual friend and Hungangel (Jamie Michelle) advocates.

Stavros
11-07-2015, 07:34 PM
An interesting article in The Guardian today on AI and jobs, can make grim reading for some. Contains some fascinating stats, like these -

The 'robot revolution' "promises robot carers for an ageing population [but] it also forecasts huge numbers of jobs being wiped out: up to 35% of all workers in the UK and 47% of those in the US, including white-collar jobs, seeing their livelihoods taken away by machines."

“In 1900, 40% of the US labour force worked in agriculture. By 1960, the figure was a few per cent. And yet people had jobs; the nature of the jobs had changed. “But then again, there were 21 million horses in the US in 1900. By 1960, there were just three million. The difference was that humans have cognitive skills – we could learn to do new things. But that might not always be the case as machines get smarter and smarter.”

“The fastest-growing occupations in the past five years are all related to services...The two biggest are Zumba instructor and personal trainer.”

-Hang on, Zumba Zumba, isn't that...?

But an interesting article, and it is here
http://www.theguardian.com/business/2015/nov/07/artificial-intelligence-homo-sapiens-split-handful-gods

buttslinger
11-07-2015, 07:52 PM
They say the caveman worked about 30 hours a week, hunting, fishing, gathering, sweeping out the cave; the rest of the time he spent socializing and hanging out with the family.
You would think now that they have factories to stamp out plastic homes and raise pigs and cows that we could return to a 30-hour workweek, shepherding the milking machines and monitoring the Formica machines.
Instead we've got Wall Street tycoons cooking up "busy-work" jobs so they can scoop up half the wages. Power to the Robots!

Laphroaig
11-07-2015, 08:48 PM
An interesting article in The Guardian today on AI and jobs, can make grim reading for some.


But an interesting article, and it is here
http://www.theguardian.com/business/2015/nov/07/artificial-intelligence-homo-sapiens-split-handful-gods

There have been a few similar articles on the BBC recently, including a Panorama program (which I haven't seen).

http://www.bbc.co.uk/news/technology-34231931

http://www.bbc.co.uk/news/technology-33327659

http://www.bbc.co.uk/programmes/b06cn1wv

Also, find out here how likely it is that your job will be replaced by a robot.

http://www.bbc.co.uk/news/technology-34066941

sukumvit boy
02-04-2016, 05:09 AM
Werner Herzog's new documentary, "Lo and Behold: Reveries of the Connected World".
Another jewel from Herzog about the past, present and future of the internet and living in a connected world. He laments that "things are going too fast" for most people to keep up.

http://www.youtube.com/watch?v=7Pv8Qj0Vkbo

martin48
02-04-2016, 02:04 PM
Do not start this whole thing going again! PLEASE


The "Tipler" in the "standard off-set Tipler sinusoid" is Frank Tipler. He was a promising physicist/cosmologist, who wrote a number of interesting papers (some of them on spacetimes that support time-travel), until he went off the deep end. Now he's the head guru of the Omega Point Theology that our mutual friend and Hungangel (Jamie Michelle) advocates.

sukumvit boy
02-05-2016, 06:12 AM
LOL, WTF? You talkin to me?
This is right on topic, and back to the original question posed in this thread.

broncofan
02-05-2016, 08:55 AM
He's talking to Trish, joking about all the incoherent Tipler theories we might hear from Jamie Michelle if Jamie decides she has left something unsaid about the universe and Jesus and forty million links of incoherent babble.

martin48
02-05-2016, 01:28 PM
Broncofan - You are right. I would lose the will to live if it started up again.

On Super AI - I think there is a lot of hype about at present. There has not been a great breakthrough in AI. 'Deep learning' is a natural extension of what has gone on before. But big companies are pushing it and that's the danger.

trish
02-05-2016, 04:55 PM
How about a nice game of GO?

http://qz.com/603313/googles-ai-just-cracked-the-game-that-supposedly-no-computer-could-beat/

martin48
02-05-2016, 06:23 PM
Two computers playing chess.

First one goes "pawn to king 4"

The second one resigns.


It is an impressive feat but is intelligence only about acting out a game with simple fixed and agreed rules - albeit billions of combinations?

trish
02-05-2016, 08:19 PM
Two computers playing chess.

First one goes "pawn to king 4"

The second one resigns.


It is an impressive feat but is intelligence only about acting out a game with simple fixed and agreed rules - albeit billions of combinations?

Only if the evolution of the universal wave function is unitary and the cosmos is utterly deterministic. In that case we're all hopelessly entangled.

On a slightly more serious note, the worry isn't that computers might become intelligent, simply that, even though programmed, their interactions with a chaotic world might be unpredictable and dangerous. One might argue that the same critique applies to humans; but at least conscious agents have a stake in the final output whereas an unconscious machine running a sequence of ill-conceived commands does not.

Is this an imminent danger? Stephen Hawking apparently thinks so. I do not currently share his concern. Besides, if SkyNet takes over the world we can always go back in time through an artificially constructed wormhole and kill its prototypes.

martin48
02-09-2016, 07:22 PM
We don't have a great history of predicting the end of the world. But I suppose we have only got to be right once.

nitron
08-10-2016, 07:55 AM
Play this with the sound off.......https://www.youtube.com/watch?v=tDjfEbwu3xA ,
""Fiery the angels rose, and as they rose deep thunder roll'd / Around their shores: indignant burning with the fires of Orc."

nitron
08-10-2016, 07:56 AM
and this play simultaneously as the soundtrack.....https://www.youtube.com/watch?v=gDBa1jgwR7k.....
"Fiery the angels fell; deep thunder rolled around their shores; burning with the fires of Orc."

Stavros
12-21-2016, 12:09 AM
I am copying in the whole of the article below from the Telegraph, mainly because a lot of their articles are hidden behind a paywall so some people may not be able to read it. Some might delight in the possibility of having an android Venus Lux or Mia Isabella all to themselves (pick your favourite), and that would be a step up from Scarlett Johansson from what I can see. But the article raises the curious question: should the android have rights? For example, the right not to be abused physically, emotionally or through some interference with her circuit boards? Will the android partner have a memory beyond the control of her (his?) owner that can be used in legal actions?
What starts out looking like a sexual dream situation could end up being a nightmare...yet even I am enough of a pervert to want to at least 'test drive' some models....

Why female sex robots are more dangerous than you think
Robots could replace sex between couples, according to experts

Tobi Jackson Gee 20 December 2016 • 11:31am


I was going to start this article about robots with a not-so-clever reference to Fritz Lang’s Metropolis. But then I spoke to Blay Whitby, a philosopher concerned with the social impact of emerging technologies and the trivialisation of robots in the media - and I decided otherwise.

Because when it comes to robots, it’s simply no use discussing them through the lens of our favourite film or science fiction book. Cliched as it may be, the future is here; we can and should talk about reality. Within a matter of decades we’ve become entirely reliant on technology and robots are increasingly part of our everyday lives.
The latest chapter comes courtesy of Dr Trudy Barber, a pioneer in the impact of technology on sexual intercourse. Speaking at the International Congress of Love and Sex with Robotics, Dr Barber said people’s growing immersion in technology means it's only a matter of time before it takes a mainstream role in sex.
Put simply: sex between couples will increasingly be saved for special occasions as robots step in to satisfy our everyday needs. Dr Barber predicted the use of artificial intelligence (AI) devices in the bedroom will be socially normal within 25 years and that the machines would enable people to appreciate 'the real thing'.
"I think what will happen is that they will make real-time relationships more valuable and exciting", she added.

Devices such as Rocky or Roxxxy True Companion can currently be bought for around £7,000, but advances in the field are predicted to make sex robots increasingly lifelike and affordable.
Indeed, in April this year, a man figured out a way to make a robot in his own home that resembled a woman he doesn't know.
Ricky Ma, 42, a Hong Kong-based man with no formal training in robotics, spent £35,000 to create a robotic woman who looks exactly like Scarlett Johansson. And there's absolutely nothing she can do about it.
Unlike the vivacious and intelligent actress, his robotic counterpart was programmed to respond to questions like ‘you are very beautiful’ and ‘you’re so cute’ with little more than a coquettish smile and a wink.
It's an utterly disappointing reflection of the way women are portrayed in society - Ma’s clever three dimensional creation is about as one dimensional as you can get.
The 'Mark 1' robot that looks like Scarlett Johansson Credit: Bobby Yip / Reuters

Is all this cause for concern? Of course. Because right now more money is being spent on making these things than thinking about the ethical and societal ramifications. We already know porn provides a terrifying reflection on how society views women, which can manifest itself in real life.

But what happens when machines start contributing to the objectification of women too?
There's also a real worry that people will abuse robots assigned human traits - whether it be in a sexual or physical way. Whitby thinks it's a legitimate concern: “Will people mistreat robots? Oh yes, I’m sure. The reason I’m sure is because they already do. The way people first meet artificial intelligence is in a character in a video game that they’re shooting at.”
As we are yet to truly understand the effect that playing violent video games has on young minds, it will be years before we even begin to comprehend the knock-on effects that the mistreatment of human-like robots has on our behaviour towards each other.
Dr Kathleen Richardson, a Senior Research Fellow in the Ethics of Robotics at the Centre for Computing and Social Responsibility, has done extensive research into this area - especially in regards to women. She says: “A machine, like the portrayal of women in pornography, prostitution and the media are entirely objects for male gratification. But women aren't like what males see in pornography or in prostitution or in popular media.
"In these areas women are coerced or told how to act or behave with a threat of money or violence. In real life, women really have their own thoughts and feelings and preferences and desires. It seems logical that if this extreme control can't be experienced by men with real women, the only next step is to create artificial objects."
The people creating these robots are also partly to blame. A 2014 Nesta study titled 'Our Work Here Is Done: Visions of a Robot Economy' (http://www.nesta.org.uk/sites/default/files/our_work_here_is_done_robot_economy.pdf) examined how gender is assigned to machines in the workplace. Researchers found that 'male' robots are thought to be better at repairing technical devices while 'female' robots are thought to be more suited to domestic and caring services.
In other words: people with gendered ideas make robots that conform to gender norms, which then perpetuates existing stereotypes.
As long as these norms go unchallenged, and robots are designed to fulfil perceived gender roles (has anyone yet talked about a male 'sex robot'?) this vicious cycle will continue.
But it doesn't have to be this way. What if the people programming and designing these robots didn’t have such stereotypical views? What if they used this amazing new platform to defy gender stereotypes, and rather than serving as a poor reflection on society, instead inspired us to look at ourselves in new ways?
It’s a nice thought. But as long as manufacturers stand to make a profit from robotics, and see these types of characterisations as a means to creating more humanised, relatable machines that sell better, not much is going to change.
Inventor Douglas Hines with his True Companion sex robot, Roxxy Credit: ROBYN BECK

At the 2016 AAAS (American Association for the Advancement of Science - the world's largest scientific society) annual meeting, Yale ethicist Wendell Wallach (http://www.wendellwallach.com/) spoke of his concerns about AI. He said: “There’s a need for more concerted action to keep technology a good servant - and not let it become a dangerous monster.”

While codes exist to guide the creation of machines, the lack of law in place means that time and effort is being ploughed into manufacturing and programming, and no one is thinking twice about the effects this will have on living and breathing humans. Being cautious isn’t sexy in the business of technology - and it rarely comes with financial rewards.
Whitby urged us to act now, before it’s too late. “We need to have these discussions instead of waking up one day when robot companions are normal and question whether it was a good idea or not," he says.
And as this kind of technology is rolled out around the world, he had a stark warning about where the democratisation of technology is taking us: “How would you feel about your ex boyfriend getting a robot that looked exactly like you, just in order to beat it up every night?”
It’s a shocking idea, isn’t it? On the one hand, it’s a machine - it isn’t you. But then, it is you, because it stands for you, and who you are.
Whitby added: “I mean, it might be alright, it might mean he can be calmer and more normal with you - think about Aristotle’s theory of catharsis. But we really haven’t discussed this as a society. We’re drifting towards it and the technology is very close to being available, but we just aren’t talking about it.”
It’s time we started having these conversations, before those oft quoted science fiction dystopias become a nightmarish reality.

http://www.telegraph.co.uk/women/life/female-robots-why-this-scarlett-johansson-bot-is-more-dangerous/

nitron
12-24-2016, 05:46 AM
"Why female sex robots are more dangerous than you think..."
SJW, feminist drivel. All I hear is (Steve Moxon's) "The Woman Racket" (http://www.imprint.co.uk/books/TWR.html)

will be in trouble.
Sex with inanimate objects, that's all we're talking about.

trish
12-24-2016, 07:05 AM
Nothing wrong with social justice.


nitron
01-28-2017, 03:16 AM
I'm hoping that each group will be divided up into their own sections, and be allowed to pursue their own unique foolishness, without bothering the others. Similar to the way Mennonites and Amish are separated from the rest of society.
That's what I'm looking forward to with AI, soft or hard. We all have our biases and histories, and only with AI acting between us can we move forward, if at all.

I'm pretty sure we can't govern ourselves without causing misery and destruction in the long term. Evolution has forced this on us. For good or bad.

nitron
01-28-2017, 12:48 PM
"Nothing wrong with social justice."

As long as it affects men. Hey TRISH, fuck men's rights.
It's ok to limit their choices, hey TRISH.
SOCIAL INJUSTICE, trish, did I get you right? Eh, as long as it's fucking MEN over!

trish
01-28-2017, 05:41 PM
Of course justice is for (and should therefore affect) all people regardless of their gender. But are we talking justice or rights? If we're talking rights, it's not a zero-sum game as you seem to think. Recognizing the rights of others doesn't entail losing your own. I'm not for limiting anyone's choices if those choices constitute a real liberty and not just a mere advantage based on prejudice.

Stavros
05-01-2017, 09:36 AM
Apparently smart phones are in their last phase of development as 'neural laces' are developed which will link our brains to computing technology. No need to carry around all those gadgets, when all you need do is blink and get connected. And of course 'they' can connect directly to you...thus Elon Musk has created a firm called Neuralink -

with a goal of building computers into our brains by way of "neural lace," a very early-stage technology that lays on your brain and bridges it to a computer. It's the next step beyond even that blending of the digital and physical worlds, as human and machine become one.

And with the current White House loudspeaker blaming the Constitution for 100 days of failure, we see the future (without that damn Constitution), and the future is Orange...(as the ad in the UK used to say)
https://uk.yahoo.com/finance/news/smartphone-eventually-going-die-then-145451788.html

nitron
05-20-2017, 03:04 AM
I think it will be better if we could understand how the brain works , thereby , how the mind works, first. Then , we should merge with the devices, neurally. At least we could trust that more than a totally autonomous intelligence. Think of it as using Nature as model to build on rather than unknown entity. It seems to me the more prudent approach. But , Stavro , after reading your last entry , it does feel way to invasive for my liking. Nevertheless , it would be the more prudent approach considering the alternative .