It is currently Fri Oct 20, 2017 10:05 pm

All times are UTC - 6 hours [ DST ]




Post new topic Reply to topic  [ 537 posts ]  Go to page Previous  1 ... 17, 18, 19, 20, 21, 22, 23  Next
Author Message
PostPosted: Fri Aug 25, 2017 10:13 am 
Offline
* * * * *
User avatar

Joined: Sat Sep 08, 2007 2:53 pm
Posts: 8092
Location: PNW
Has thanked: 31 times
Been thanked: 146 times
First they're servants, then equals, then gods' servants, then gods?

I don't like this brave new world.
Most people will not be in mansions with a fleet of robot servants to attend their every whim.
I think we have a new disaster to prepare for...
I think we need a new acronym.

_________________
In my day, we didn't have virtual reality.
If a one-eyed razorback barbarian warrior was chasing you with an ax, you just had to hope you could outrun him.
-
Preps buy us time. Time to learn how and time to remember how. Time to figure out what is a want, what is a need.


Top
 Profile  
Reply with quote  
PostPosted: Fri Aug 25, 2017 12:07 pm 
Offline
ZS Member
ZS Member
User avatar

Joined: Sun Apr 05, 2009 11:58 pm
Posts: 3633
Has thanked: 1387 times
Been thanked: 447 times
ZombieGranny wrote:
First they're servants, then equals, then gods' servants, then gods?

I don't like this brave new world.
Most people will not be in mansions with a fleet of robot servants to attend their every whim.
I think we have a new disaster to prepare for...
I think we need a new acronym.



The forum could be called.............. ZOMBOT SQUAD :mrgreen:

The undead 'Bots lust for the CPU chips from all of your appliances!

Image

_________________
Most of my adventures are on my blog http://suntothenorth.blogspot.com/
My Introduction With Pictures: http://zombiehunters.org/forum/viewtopi ... 10&t=79019
Graduated with honors from kit porn university


Top
 Profile  
Reply with quote  
PostPosted: Wed Aug 30, 2017 7:07 am 
Offline
* * *
User avatar

Joined: Wed Mar 05, 2008 3:07 pm
Posts: 495
Location: North Carolina
Has thanked: 5 times
Been thanked: 33 times
Well, I just found a new resource that I think everyone who takes this topic of discussion seriously (which I think most all of us do, and I enjoy the ability to talk about serious topics in a lighthearted manner too - that is the beauty of ZS) ought to take a look at. And finally, there IS at least something you can do to help AI research:

First:


Follow-up link:
https://futureoflife.org/superintelligence-survey/

The results of the survey, and some of the concepts expounded on the site, are good food for thought.

I think it is interesting that we (I mean AI safety researchers, and myself as a philosophical hobby) spend a lot of effort trying to ensure that a superintelligent AI's goals will align with our goals.

First, to qualify as superintelligence, it must process information and 'think' better than us in all possible domains. Then it is also quite possible to conceive that there are higher dimensions of morality, etc., which it may understand but we do not. Why would we want to (or could we even) restrain it from realizing those goals?

Secondly/thirdly/etc., if a superintelligence is able to think better than us in all domains, why not let it decide for itself what is ultimately the best use of its resources? It may be maximizing a goal which we don't understand, and we might call it rampant, but could that goal ultimately be better for us and/or the universe than our own goals? While we have a strong bias that our goals (humanity, collectively) are the best, they have often played out as repetitious cycles of short-sighted gains, trading destruction for our own species, not to mention those haplessly left in their wake. Maybe a superintelligence will naturally follow that path (maybe it is one way of maximizing certain goals) with or without our guidance, but could it also find a different way?

I've also recently wondered if there is a sort-of extra-terrestrial MAD angle on superintelligence. Certainly, amongst our species, whoever develops a superintelligent AI with malevolent intent could conquer and/or rule this planet, but I also wonder if we have a duty to develop a benevolent superintelligent AI to protect not only our planet, but perhaps the galaxy, from other evil-aligned civilizations/superintelligences before they preemptively wipe us out?

Linking the above concepts: Given that many people want humanity to expand into the cosmos, and seeing how we handle natural resources, other life, and each other, do we actually have an obligation to create superintelligence to protect the universe from us? :roll:

_________________
Rahul Telang wrote:
If you don’t have a plan in place, you will find different ways to screw it up

Colin Wilson wrote:
There’s no point in kicking a dead horse. If the horse is up and ready and you give it a slap on the bum, it will take off. But if it’s dead, even if you slap it, it’s not going anywhere.


Top
 Profile  
Reply with quote  
PostPosted: Wed Aug 30, 2017 12:21 pm 
Offline
* * * * *
User avatar

Joined: Thu May 16, 2013 3:45 pm
Posts: 2074
Has thanked: 1036 times
Been thanked: 284 times
Most religions are claiming that a super thinker has a better idea of morality that we may or may not understand. In the US, most people don't really care what a deity thinks. They care what they, themselves, think. I don't think most Americans or Europeans will be impressed with an AI super intelligence who has a higher moral purpose. I certainly don't think most people will be willing to submit/surrender to it.

_________________
*Remember: I'm just a guy on the internet :)
*Don't go to stupid places with stupid people & do stupid things.
*Be courteous. Look normal. Be in bed by 10'clock.

“It's a dangerous business, Frodo, going out your door. You step onto the road, and if you don't keep your feet, there's no knowing where you might be swept off to.” -Bilbo Baggins.


Top
 Profile  
Reply with quote  
PostPosted: Wed Aug 30, 2017 12:35 pm 
Offline
ZS Moderator
ZS Moderator
User avatar

Joined: Sun Mar 04, 2007 10:18 pm
Posts: 15572
Location: Greater New Orleans Area
Has thanked: 809 times
Been thanked: 459 times
JayceSlayn wrote:
Secondly/thirdly/etc., if a superintelligence is able to think better than us in all domains, why not let it decide for itself what is ultimately the best use of its resources? It may be maximizing a goal which we don't understand, and we might call it rampant, but could that goal ultimately be better for us and/or the universe than our own goals? While we have a strong bias that our goals (humanity, collectively) are the best, they have often played out as repetitious, cycles of short-sighted gains trading destruction for our own species, not to mention those haplessly left in its wake. Maybe a superintelligence will naturally follow that path (maybe it is one way of maximizing certain goals) with or without our guidance, but could it also find a different way?



Actually that is one of the most compelling arguments I have read about AI.
If you substitute the word child for AI, that is in effect what we as a race do now. We try to program our children with our values and mores, having created them with our genetics. We then hope for the best, trusting that they will take care of us as we age. They actually, assuming we reach old age, make life and death decisions for us (e.g. put in DNR orders or even approve removal of life support).

_________________
Duco Ergo Sum

Link to ZS Hall of Fame Forum


Top
 Profile  
Reply with quote  
PostPosted: Wed Aug 30, 2017 12:46 pm 
Offline
* * * * *
User avatar

Joined: Thu Feb 28, 2013 10:19 pm
Posts: 2408
Location: Red River Valley
Has thanked: 292 times
Been thanked: 141 times
raptor wrote:
JayceSlayn wrote:
Secondly/thirdly/etc., if a superintelligence is able to think better than us in all domains, why not let it decide for itself what is ultimately the best use of its resources? It may be maximizing a goal which we don't understand, and we might call it rampant, but could that goal ultimately be better for us and/or the universe than our own goals? While we have a strong bias that our goals (humanity, collectively) are the best, they have often played out as repetitious, cycles of short-sighted gains trading destruction for our own species, not to mention those haplessly left in its wake. Maybe a superintelligence will naturally follow that path (maybe it is one way of maximizing certain goals) with or without our guidance, but could it also find a different way?



Actually that is one of the most compelling arguments I have read about AI.
If you substitute the word child for AI that is in effect what we as a race do now. We try to program our children with our values and mores, having created them with our genetics. We then hope for the best with the hope that they will take care of us as we age. They actually, assuming we reach old age, make life and death decisions for us (i.e. put in DNR orders or even approve removal of life support).


Or they move away and do not return phone calls from the creators because they can't get over some of the programming methods that were used.

Or they go really bad and eliminate their creators for a myriad of reasons that usually go back to programming.

There is a phrase in the IT world: junk in = junk out
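
A quick illustration of that idea in code (a toy sketch; the data and function here are invented purely for the example, not any real training pipeline): even a perfectly working learner can only echo the quality of what it was fed.

Code:
# Minimal "garbage in, garbage out" sketch. The 'learner' below works exactly
# as designed; only the quality of the training labels changes the outcome.
from collections import Counter

def train_majority_label(examples):
    """'Learn' by memorizing the most common label in the training data."""
    labels = [label for _, label in examples]
    return Counter(labels).most_common(1)[0][0]

# Carefully labeled data: hot things are dangerous.
clean_data = [("fire", "dangerous"), ("lava", "dangerous"), ("ice", "safe")]
# The same inputs, carelessly labeled.
junk_data = [("fire", "safe"), ("lava", "safe"), ("ice", "safe")]

print(train_majority_label(clean_data))  # -> "dangerous"
print(train_majority_label(junk_data))   # -> "safe" (junk in, junk out)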

_________________
The Internet : Narcissism :: The Hitachi Wand : Masturbation


Top
 Profile  
Reply with quote  
PostPosted: Wed Aug 30, 2017 1:01 pm 
Offline
ZS Lifetime Member
ZS Lifetime Member
User avatar

Joined: Mon Jul 13, 2009 5:24 pm
Posts: 8022
Location: Gulf Coast, AL
Has thanked: 123 times
Been thanked: 229 times
Hiroshima_Morphine wrote:
There is a phrase in the IT world: junk in = junk out

I thought it was GIGO. :clownshoes:

_________________
whisk.e.rebellion wrote:
It's not what you say anymore. It's how you say it.





Top
 Profile  
Reply with quote  
PostPosted: Wed Aug 30, 2017 2:34 pm 
Offline
ZS Member
ZS Member
User avatar

Joined: Mon Aug 22, 2005 2:48 am
Posts: 2720
Location: Des Moines, Iowa
Has thanked: 464 times
Been thanked: 136 times
woodsghost wrote:
Most religions are claiming that a super thinker has a better idea of morality that we may or may not understand. In the US, most people don't really care what a deity thinks. They care what they, themselves, think. I don't think most Americans or Europeans will be impressed with an AI super intelligence who has a higher moral purpose. I certainly don't think most people will be willing to submit/surrender to it.
Damn Straight!

_________________
Matthew Paul Malloy
Veteran: USAR, USA, IAANG.

Dragon Savers!
Golden Dragons!
Tropic Lightning!
Duty! Honor! Country!


Top
 Profile  
Reply with quote  
PostPosted: Wed Aug 30, 2017 2:36 pm 
Offline
ZS Member
ZS Member
User avatar

Joined: Mon Aug 22, 2005 2:48 am
Posts: 2720
Location: Des Moines, Iowa
Has thanked: 464 times
Been thanked: 136 times
JayceSlayn wrote:
[snip]
I believe that I have previously made my opinion clear on this matter... :mrgreen:

_________________
Matthew Paul Malloy
Veteran: USAR, USA, IAANG.

Dragon Savers!
Golden Dragons!
Tropic Lightning!
Duty! Honor! Country!


Top
 Profile  
Reply with quote  
PostPosted: Mon Sep 04, 2017 7:48 pm 
Offline
ZS Lifetime Member
ZS Lifetime Member
User avatar

Joined: Mon Jul 13, 2009 5:24 pm
Posts: 8022
Location: Gulf Coast, AL
Has thanked: 123 times
Been thanked: 229 times
Rule the world, or cause WW3.

https://www.facebook.com/MarthaMacCallum/videos/10155703148887094/

_________________
whisk.e.rebellion wrote:
It's not what you say anymore. It's how you say it.





Top
 Profile  
Reply with quote  
PostPosted: Mon Sep 04, 2017 7:50 pm 
Offline
ZS Member
ZS Member
User avatar

Joined: Mon Aug 22, 2005 2:48 am
Posts: 2720
Location: Des Moines, Iowa
Has thanked: 464 times
Been thanked: 136 times
RickOShea wrote:

I can believe it.

_________________
Matthew Paul Malloy
Veteran: USAR, USA, IAANG.

Dragon Savers!
Golden Dragons!
Tropic Lightning!
Duty! Honor! Country!


Top
 Profile  
Reply with quote  
PostPosted: Mon Sep 04, 2017 8:30 pm 
Offline
* * * * *
User avatar

Joined: Sun Dec 01, 2013 12:30 am
Posts: 1540
Has thanked: 220 times
Been thanked: 347 times
MPMalloy wrote:
JayceSlayn wrote:
[snip]
I believe that I have previously made my opinion clear on this matter... :mrgreen:

I've forgotten. You're two thumbs up on AI all the way, right?

_________________
As of now I bet you got me wrong


Top
 Profile  
Reply with quote  
PostPosted: Mon Sep 04, 2017 8:59 pm 
Offline
ZS Member
ZS Member
User avatar

Joined: Mon Aug 22, 2005 2:48 am
Posts: 2720
Location: Des Moines, Iowa
Has thanked: 464 times
Been thanked: 136 times
flybynight wrote:
MPMalloy wrote:
JayceSlayn wrote:
[snip]
I believe that I have previously made my opinion clear on this matter... :mrgreen:

I've forgotten. You're two thumbs up on AI all the way, right?

Two thumbs up, while holding twin M57's :D

_________________
Matthew Paul Malloy
Veteran: USAR, USA, IAANG.

Dragon Savers!
Golden Dragons!
Tropic Lightning!
Duty! Honor! Country!


Top
 Profile  
Reply with quote  
PostPosted: Tue Sep 05, 2017 3:18 pm 
Offline
ZS Member
ZS Member
User avatar

Joined: Mon Aug 22, 2005 2:48 am
Posts: 2720
Location: Des Moines, Iowa
Has thanked: 464 times
Been thanked: 136 times
From CNBC: Only 13% of Americans are scared that robots will take their jobs, a recent poll shows
Quote:
Only 13% of Americans are scared that robots will take their jobs, a recent poll shows By Zameena Mejia 1 Hour Ago

Mark Cuban: AI will make the world’s first trillionaires

Responding to Russian president Vladimir Putin's warning this week that "whoever leads in artificial intelligence will rule the world," Tesla CEO Elon Musk said that competition for AI will "most likely cause" World War 3.

For now, Americans aren't sweating it. In fact, most employed U.S. adults aren't worried about technology eliminating their jobs, the annual Work and Education Gallup poll shows.

Only 13 percent of Americans are fearful that tech will eradicate their work opportunities in the near future, according to the poll. Notably, workers are relatively more concerned about immediate issues like wages and benefits.


This corresponds with another recent Gallup survey finding that about one in eight workers, or 13 percent of Americans, also believe it's likely they will lose their jobs due to new technology, automation, robots or AI in the next five years.

While the survey reflects a generally confident American workforce, Monster career expert Vicki Salemi tells CNBC Make It that people should not become complacent.


"Employees need to think of themselves as replaceable in a way that propels them into action," Salemi says, "so they can focus on continuously learning and sharpening their skills."

In the meantime, Americans can look to what the tech giants are saying.

Musk tends to paint a drearier picture of a world shared with AI, saying its effects will be polarizing on humanity.

While he created OpenAI to "build safe artificial intelligence," Musk says AI will cause job disruption and that no job is safe.

On the contrary, Salemi emphasizes that Americans shouldn't be paranoid and lose sleep every night. Rather, they should think about AI "from a place of power."

"If your job does start to get automated, you'll already have a game plan and solid skill set to back you up for your next career move," she says.

AI optimists Facebook CEO Mark Zuckerberg and business magnate Bill Gates think AI and humans can share a symbiotic relationship.

In an open letter, Gates writes that if he were just starting out his career today, AI is one of the few fields he would join.

Quote:
"It scares the s*** out of me," billionaire Mark Cuban says of AI


"We have only begun to tap into all the ways it will make people's lives more productive and creative," Gates writes.

Furthermore, Zuckerberg says AI will make our lives better in the future.

"In the next five to 10 years, AI is going to deliver so many improvements in the quality of our lives," Zuckerberg says in a recent Facebook Live.

If you find yourself in the 13 percent of Americans worried about losing their jobs to robots, Salemi says you can "robot-proof" your job through networking.

"Always be on top of your game, she says. "If your industry is becoming more digitally focused, get schooled on specific skills."

"Instead of being lax about your career, always stay ahead of the curve, keep your resume in circulation, ask yourself where the industry is headed and most importantly where you and your skills fit in."


Beyond disgusted. People are confusing their livelihoods with the very existence of humanity.

_________________
Matthew Paul Malloy
Veteran: USAR, USA, IAANG.

Dragon Savers!
Golden Dragons!
Tropic Lightning!
Duty! Honor! Country!


Top
 Profile  
Reply with quote  
PostPosted: Tue Sep 05, 2017 4:08 pm 
Offline
* * * * *

Joined: Tue Feb 03, 2009 9:32 am
Posts: 4763
Location: In the Middle East, for my sins.
Has thanked: 229 times
Been thanked: 167 times
Call me old school, but my purpose is to ensure my DNA carries on, come hell or high water.

If AI gets in the way of that, I and my progeny will trash any processor capable of carrying it.
If an AI supports our goal, we'll take care of it like family. I'd have no problem with descendants named Dora, Minerva, or Athena in the family, and in fact would welcome them with open arms and a joyous heart.

Skynet, on the other hand, is going to get a very rough reception.

_________________
“Political tags – such as royalist, communist, democrat, populist, fascist, liberal, conservative, and so forth – are never basic criteria. The human race divides politically into those who want people to be controlled and those who have no such desire.” Robert A. Heinlein


Last edited by LowKey on Tue Sep 05, 2017 4:26 pm, edited 1 time in total.

Top
 Profile  
Reply with quote  
PostPosted: Tue Sep 05, 2017 4:15 pm 
Offline
ZS Member
ZS Member
User avatar

Joined: Mon Aug 22, 2005 2:48 am
Posts: 2720
Location: Des Moines, Iowa
Has thanked: 464 times
Been thanked: 136 times
Fucking AIs & their robot police. :twisted:

_________________
Matthew Paul Malloy
Veteran: USAR, USA, IAANG.

Dragon Savers!
Golden Dragons!
Tropic Lightning!
Duty! Honor! Country!


Top
 Profile  
Reply with quote  
PostPosted: Tue Sep 05, 2017 6:18 pm 
Offline
ZS Member
ZS Member
User avatar

Joined: Mon Aug 22, 2005 2:48 am
Posts: 2720
Location: Des Moines, Iowa
Has thanked: 464 times
Been thanked: 136 times
From ABC (Australia): Elon Musk says artificial intelligence arms race most likely to cause World War III
Quote:
Elon Musk says artificial intelligence arms race most likely to cause World War III

An artificial intelligence arms race, and not the nuclear stand-off on the Korean Peninsula, will likely cause the next world war, Elon Musk has said.

And who are the frontrunners, according to the billionaire inventor? Russia and China.

Mr Musk, who is the founder of SpaceX and Tesla, and who has promised to build the world's biggest lithium ion battery in South Australia, made the comments just days after Russian President Vladimir Putin told university students, "whoever becomes the leader in this sphere will become the ruler of the world".

"Artificial intelligence is the future not only for Russia but for all of human kind. It comes with colossal opportunities, but also threats that are difficult to predict," Mr Putin told a forum of children at the weekend.

Later on Twitter, Mr Musk appeared to agree.

"It begins … competition for AI superiority at national level most likely cause of WWIII [in my opinion]," he said.

He said China and Russia were currently leading the race, but Mr Putin has said if Russia does become a leader, it will seek to share its knowledge with the rest of the world.

"It would not be very desirable that this monopoly be concentrated in someone specific's hands," he said.

"That's why if we become leaders in this area, we will share this know-how with the entire world, the same way we share our nuclear technologies today."

Though a monopoly may be the least of our worries if future AI weapons are able to control themselves, Mr Musk said.

"[WWIII] may be initiated not by the country leaders, but one of the AI's, if it decides that a pre-emptive strike is most probable path to victory," he tweeted.

Mr Musk well known for concerns about AI

This is not the first time Mr Musk has been vocal about his concerns to do with AI.

He and other prominent tech entrepreneurs and scientists, such as Stephen Hawking, believe AI could quickly surpass human intelligence and render us the second-most intelligent species on the planet.

Facebook chief executive officer Mark Zuckerberg has previously accused Mr Musk of being an alarmist.

But Toby Walsh, the Scientia Professor of AI at the University of New South Wales, recently told Lateline it would not be long until robots became "weapons of terror".

"They will be the next revolution in warfare after nuclear bombs," he said.

Quote:
"The UN is formally discussing these issues, but there needs to be a bit more urgency."


Quote:
"These are weapons where there's no human in the loop and a computer is making the final life-or-death decision."


Professor Walsh said semi-autonomous drones were already being used for warfare in Afghanistan and Iraq, and could soon be used by terrorist organisations like the Islamic State group.

"They will be used by despots … and they will be used against civilians," he said.

Quote:
"We only have a couple of years until this technology is out there, and once it is out there it's going to be much harder to get it banned and removed from the arsenals of the world."

I posted this in order to flesh out Mr. Musk's statement a bit more.

Death to robots- :mrgreen:

_________________
Matthew Paul Malloy
Veteran: USAR, USA, IAANG.

Dragon Savers!
Golden Dragons!
Tropic Lightning!
Duty! Honor! Country!


Top
 Profile  
Reply with quote  
PostPosted: Wed Sep 06, 2017 12:07 pm 
Offline
* * * * *
User avatar

Joined: Wed Jan 04, 2012 10:08 am
Posts: 2574
Location: Coastal SC
Has thanked: 268 times
Been thanked: 294 times
I know MPM loves when I post stuff like this:

http://www.dailymail.co.uk/sciencetech/ ... uring.html

Quote:
Mankind is at a 'tipping point' as automation and AI begins to replace us in a 'long and painful process', experts warn

As automation and AI technologies improve, many worry about job futures
But some say that when technology destroys jobs, people find other jobs
However, this will possibly happen after 'a long period of painful adjustment'
Accountants, lawyers, truckers and even construction workers are about to find their work changing substantially, if not entirely taken over by computers

By Moshe Y. Vardi with The Conversation

As automation and artificial intelligence technologies improve, many people worry about the future of work.

If millions of human workers no longer have jobs, the worriers ask, what will people do, how will they provide for themselves and their families, and what changes might occur (or be needed) in order for society to adjust?

Many economists say there is no need to worry.

A recent report from the International Labor Organization found that more than two-thirds of Southeast Asia's 9.2 million textile and footwear jobs are threatened by automation

They point to how past major transformations in work tasks and labor markets – specifically the Industrial Revolution during the 18th and 19th centuries – did not lead to major social upheaval or widespread suffering.

These economists say that when technology destroys jobs, people find other jobs. As one economist argued:
WILL A ROBOT TAKE YOUR JOB?

'Will Robots Take My Job' is a machine learning tool that gathers data from a 2013 Oxford University report entitled, 'The Future of Employment: How susceptible are jobs to computerization'.

Users interested in learning the fate of their careers type in their occupation in the provided box and hit enter.

In seconds, the system provides the percentage likelihood that they will be replaced by a machine, the automation risk level, projected growth, median annual wage and how many people are currently employed in the field.
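
To make the mechanics concrete, here is a rough sketch of how such an occupation lookup could work, assuming a small hand-made table in place of the full Oxford dataset of 702 occupations; the occupations, probabilities and wages below are illustrative placeholders, not the tool's actual data or code.

Code:
# Toy occupation -> automation-risk lookup, loosely modeled on tools like
# "Will Robots Take My Job". The table is a tiny illustrative placeholder;
# the real tool draws on the 2013 Oxford dataset of 702 occupations.
RISK_TABLE = {
    # occupation: (automation probability, median annual wage in USD)
    "telemarketer":     (0.99, 27000),
    "truck driver":     (0.79, 44500),
    "accountant":       (0.94, 70500),
    "registered nurse": (0.01, 71000),
}

def lookup(occupation: str) -> str:
    entry = RISK_TABLE.get(occupation.lower())
    if entry is None:
        return f"No data for '{occupation}'."
    prob, wage = entry
    level = "high" if prob > 0.7 else "medium" if prob > 0.3 else "low"
    return (f"{occupation}: {prob:.0%} chance of automation "
            f"({level} risk), median wage ${wage:,}")

print(lookup("accountant"))        # high risk
print(lookup("registered nurse"))  # low risk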

'Since the dawn of the industrial age, a recurrent fear has been that technological change will spawn mass unemployment. Neoclassical economists predicted that this would not happen, because people would find other jobs, albeit possibly after a long period of painful adjustment. By and large, that prediction has proven to be correct.'

They are definitely right about the long period of painful adjustment!

The aftermath of the Industrial Revolution involved two major Communist revolutions, whose death toll approaches 100 million.

The stabilizing influence of the modern social welfare state emerged only after World War II, nearly 200 years on from the 18th-century beginnings of the Industrial Revolution.

Today, as globalization and automation dramatically boost corporate productivity, many workers have seen their wages stagnate.

The increasing power of automation and artificial intelligence technology means more pain may follow.
Are these economists minimizing the historical record when projecting the future, essentially telling us not to worry because in a century or two things will get better?

Reaching a tipping point

To learn from the Industrial Revolution, we must put it in the proper historical context.

The Industrial Revolution was a tipping point.

For many thousands of years before it, economic growth was practically negligible, generally tracking with population growth: Farmers grew a bit more food and blacksmiths made a few more tools, but people from the early agrarian societies of Mesopotamia, Egypt, China and India would have recognized the world of 17th-century Europe.

But when steam power and industrial machinery came along in the 18th century, economic activity took off.

The growth that happened in just a couple hundred years was on a vastly different scale than anything that had happened before.
Upheaval more than a century into the Industrial Revolution, and more than 100 years ago: An International Workers of the World union demonstration in New York City in 1914

We may be at a similar tipping point now, referred to by some as the 'Fourth Industrial Revolution,' where all that has happened in the past may appear minor compared to the productivity and profitability potential of the future.

Getting predictions wrong

It is easy to underestimate in advance the impact of globalization and automation – I have done it myself.

In March 2000, the NASDAQ Composite Index peaked and then crashed, wiping out US$8 trillion in market valuations over the next two years.

At the same time, the global spread of the internet enabled offshore outsourcing of software production, leading to fears of information technology jobs disappearing en masse.

The Association for Computing Machinery worried what these factors might mean for computer education and employment in the future.

Its study group, which I co-chaired, reported in 2006 that there was no real reason to believe that computer industry jobs were migrating away from developed countries.
Accountants, lawyers, truckers and even construction workers – whose jobs were largely unchanged by the first Industrial Revolution – are about to find their work changing substantially, if not entirely taken over by computers

The last decade has vindicated that conclusion.

Our report conceded, however, that 'trade gains may be distributed differentially,' meaning some individuals and regions would gain and others would lose.

And it was focused narrowly on the information technology industry.

Had we looked at the broader impact of globalization and automation on the economy, we might have seen the much bigger changes that even then were taking hold.

Spreading to manufacturing

In both the first Industrial Revolution and today's, the first effects were in manufacturing in the developed world.

By substituting technology for workers, U.S. manufacturing productivity roughly doubled between 1995 and 2015.

As a result, while U.S. manufacturing output today is essentially at an all-time high, employment peaked around 1980, and has been declining precipitously since 1995.
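
The arithmetic behind that pattern is simple: if productivity is output per worker, doubling productivity while output grows more slowly forces headcount down. A back-of-the-envelope sketch (the growth figures are round, illustrative numbers, not official statistics):

Code:
# Back-of-the-envelope: employment = output / productivity.
# Index everything to 1995 = 100; the 2015 figures are illustrative.
output_1995, productivity_1995 = 100.0, 1.0
workers_1995 = output_1995 / productivity_1995   # 100 (indexed)

productivity_2015 = 2.0 * productivity_1995      # "roughly doubled"
output_2015 = 1.4 * output_1995                  # assume output still grew ~40%
workers_2015 = output_2015 / productivity_2015   # 70 (indexed)

print(f"Implied employment change: {workers_2015 / workers_1995 - 1:+.0%}")
# -> -30%: output can sit at an all-time high while employment keeps falling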

Blue line: U.S. manufacturing output. Red line: Manufacturing employees. While U.S. manufacturing output today is essentially at an all-time high, employment peaked around 1980, and has been declining precipitously since 1995

Unlike in the 19th century, though, the effects of globalization and automation are spreading across the developing world.

Economist Branko Milanovic's 'Elephant Curve' shows how people around the globe, ranked by their income in 1998, saw their incomes increase by 2008.

While the income of the very poor was stagnant, rising incomes in emerging economies lifted hundreds of millions of people out of poverty.

People at the very top of the income scale also benefited from globalization and automation.

But the income of working- and middle-class people in the developed world has stagnated.

In the U.S., for example, income of production workers today, adjusted for inflation, is essentially at the level it was around 1970.

Now automation is also coming to developing-world economies.

A recent report from the International Labor Organization found that more than two-thirds of Southeast Asia's 9.2 million textile and footwear jobs are threatened by automation.

Waking up to the problems

In addition to spreading across the world, automation and artificial intelligence are beginning to pervade entire economies.

Accountants, lawyers, truckers and even construction workers – whose jobs were largely unchanged by the first Industrial Revolution – are about to find their work changing substantially, if not entirely taken over by computers.

Until very recently, the global educated professional class didn't recognize what was happening to working- and middle-class people in developed countries.

But now it is about to happen to them.

The results will be startling, disruptive and potentially long-lasting.

Political developments of the past year make it clear that the issue of shared prosperity cannot be ignored.

It is now evident that the Brexit vote in the U.K. and the election of President Donald Trump in the U.S. were driven to a major extent by economic grievances.

Our current economy and society will transform in significant ways, with no simple fixes or adaptations to lessen their effects.

But when trying to make economic predictions based on the past, it is worth remembering – and exercising – the caution provided by the distinguished Israeli economist Ariel Rubinstein in his 2012 book, 'Economic Fables':

'I am obsessively occupied with denying any interpretation contending that economic models produce conclusions of real value.'

Rubinstein's basic assertion, which is that economic theory tells us more about economic models than it tells us about economic reality, is a warning: We should listen not only to economists when it comes to predicting the future of work; we should listen also to historians, who often bring a deeper historical perspective to their predictions.

Automation will significantly change many people's lives in ways that may be painful and enduring.
JOBS THAT PAY LESS THAN $20 ARE AT RISK OF ROBOT TAKEOVER

There is an 83 percent chance that artificial intelligence will eventually take over positions that pay low wages, says the White House's Council of Economic Advisers (CEA).

A recent report suggests that those who are paid less than $20 an hour will be unemployed and see their jobs filled by robots over the next few years.

But for workers who earn more than $20 an hour, there is only a 31 percent chance, and those paid double have just a 4 percent risk.

To reach these numbers, the CEA's 2016 economic report referred to a 2013 study performed by Oxford researchers on the automation of jobs, which assigned a risk of automation to 702 different occupations.

Those jobs were then matched to a wage that determines the worker's risk of having their jobs taken over by a robot.

'The median probability of automation was then calculated for three ranges of hourly wage: less than 20 dollars; 20 to 40 dollars; and more than 40 dollars,' reads the report.
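
Those bracket figures reduce to a simple step function of hourly wage. A hedged sketch follows (the cut-offs and percentages are the ones quoted above; everything else is illustrative, not the report's actual model):

Code:
# Median automation probability by hourly wage bracket, per the CEA figures
# quoted above (illustrative step function only).
def automation_risk(hourly_wage: float) -> float:
    if hourly_wage < 20:
        return 0.83
    elif hourly_wage <= 40:
        return 0.31
    else:
        return 0.04

for wage in (15, 25, 45):
    print(f"${wage}/hr -> {automation_risk(wage):.0%} median automation risk")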

The risk of having your job taken over by a robot 'varies enormously based on what your salary is,' Council of Economic Advisers Chairman Jason Furman told reporters.

Furman also noted that the threat of robots moving in on low-wage jobs is 'another example of why those investments in education to make sure that people have skills that complements automation are so important,' referring to programs advocated by President Obama.

Moshe Y. Vardi, Professor of Computer Science, Rice University

_________________
jnathan wrote:
Since we lost some posts due to some database work I'll just put this here for posterity.


Top
 Profile  
Reply with quote  
PostPosted: Wed Sep 06, 2017 1:03 pm 
Offline
ZS Member
ZS Member
User avatar

Joined: Sun Apr 05, 2009 11:58 pm
Posts: 3633
Has thanked: 1387 times
Been thanked: 447 times
AI raises the possibility that WW3 could come relatively soon and be over quickly

https://www.yahoo.com/news/elon-musk-pr ... 17120.html

And strongman Putin says: the nation that leads in AI... "will be the ruler of the world"

https://www.theverge.com/2017/9/4/16251 ... mg00000313

_________________
Most of my adventures are on my blog http://suntothenorth.blogspot.com/
My Introduction With Pictures: http://zombiehunters.org/forum/viewtopi ... 10&t=79019
Graduated with honors from kit porn university


Top
 Profile  
Reply with quote  
PostPosted: Wed Sep 06, 2017 1:34 pm 
Offline
ZS Member
ZS Member
User avatar

Joined: Mon Aug 22, 2005 2:48 am
Posts: 2720
Location: Des Moines, Iowa
Has thanked: 464 times
Been thanked: 136 times
NamelessStain wrote:
I know MPM loves when I post stuff like this:

http://www.dailymail.co.uk/sciencetech/ ... uring.html

If people lose their jobs, & it is a long time before they find work--if they find work at all--they are not paying taxes; & what will happen to my benefits?

_________________
Matthew Paul Malloy
Veteran: USAR, USA, IAANG.

Dragon Savers!
Golden Dragons!
Tropic Lightning!
Duty! Honor! Country!


Top
 Profile  
Reply with quote  
PostPosted: Wed Sep 06, 2017 1:52 pm 
Offline
* * * * *
User avatar

Joined: Fri Feb 19, 2010 2:25 am
Posts: 3799
Location: Jackson, KY
Has thanked: 36 times
Been thanked: 55 times
teotwaki wrote:
...

And strongman Putin says: the nation that leads in AI... "will be the ruler of the world"

...


Silly human. ASIs can't be confined by meatbag concepts like nation-states :crazy:

_________________
vyadmirer wrote:
Call me the paranoid type, but remember I'm on a post apocalyptic website prepared for zombies.

Fleet #: ZS 0180

Browncoat

Imma Fudd, and proud of it.

ZS Wiki


Top
 Profile  
Reply with quote  
PostPosted: Wed Sep 06, 2017 2:04 pm 
Offline
ZS Member
ZS Member
User avatar

Joined: Sun Apr 05, 2009 11:58 pm
Posts: 3633
Has thanked: 1387 times
Been thanked: 447 times
DarkAxel wrote:
teotwaki wrote:
...

And strongman Putin says: the nation that leads in AI... "will be the ruler of the world"

...


Silly human. ASIs can't be confined by meatbag concepts like nation-states :crazy:



Image

_________________
Most of my adventures are on my blog http://suntothenorth.blogspot.com/
My Introduction With Pictures: http://zombiehunters.org/forum/viewtopi ... 10&t=79019
Graduated with honors from kit porn university


Top
 Profile  
Reply with quote  
PostPosted: Wed Sep 06, 2017 2:14 pm 
Offline
ZS Member
ZS Member
User avatar

Joined: Mon Aug 22, 2005 2:48 am
Posts: 2720
Location: Des Moines, Iowa
Has thanked: 464 times
Been thanked: 136 times
DarkAxel wrote:
Silly human. AIs can't be confined by meatbag concepts like nation-states :crazy:
AI will come to regard all humans as the same & we know what happens next... :gonk:

_________________
Matthew Paul Malloy
Veteran: USAR, USA, IAANG.

Dragon Savers!
Golden Dragons!
Tropic Lightning!
Duty! Honor! Country!


Last edited by MPMalloy on Wed Sep 27, 2017 9:39 pm, edited 1 time in total.

Top
 Profile  
Reply with quote  
PostPosted: Fri Sep 08, 2017 1:15 am 
Offline
* * * * *
User avatar

Joined: Fri Aug 27, 2010 6:01 pm
Posts: 7867
Has thanked: 165 times
Been thanked: 280 times
teotwaki wrote:
AI raises the possibility that WW3 could come relatively soon and be over quickly

https://www.yahoo.com/news/elon-musk-pr ... 17120.html

And strongman Putin says: the nation that leads in AI... "will be the ruler of the world"

https://www.theverge.com/2017/9/4/16251 ... mg00000313




Albert had this to say-

I know not with what weapons World War III will be fought, but World War IV will be fought with sticks and stones.

No AI after mass radiation bath.....

_________________
TacAir - I'd rather be a disappointed pessimist than a horrified optimist
**All my books ** some with a different view of the "PAW". Check 'em out.
Adventures in rice storage//Mod your Esbit for better stability


Top
 Profile  
Reply with quote  