PostPosted: Tue Oct 03, 2017 11:03 am 
Offline
ZS Member
ZS Member
User avatar

Joined: Mon Aug 22, 2005 2:48 am
Posts: 2906
Location: Des Moines, Iowa
Has thanked: 524 times
Been thanked: 141 times
ZombieGranny wrote:
Waiting for Christmas dinner, 1931 New York.

It happened before, it can happen again. :(

_________________
Matthew Paul Malloy
Veteran: USAR, USA, IAANG.

Dragon Savers!
Golden Dragons!
Tropic Lightning!
Duty! Honor! Country!


PostPosted: Thu Oct 05, 2017 5:37 am 
Offline
* * * * *
User avatar

Joined: Wed Jan 04, 2012 10:08 am
Posts: 2592
Location: Coastal SC
Has thanked: 268 times
Been thanked: 311 times
Link:

https://www.theguardian.com/technology/ ... -us-survey


Quote:
More than 70% of US fears robots taking over our lives, survey finds

As Silicon Valley heralds progress on self-driving cars and robot carers, much of the rest of the country is worried about machines taking control of human tasks

Olivia Solon in San Francisco

Wednesday 4 October 2017 13.15 EDT
Last modified on Wednesday 4 October 2017 16.28 EDT

Silicon Valley celebrates artificial intelligence and robotics as fields that have the power to improve people’s lives, through inventions like driverless cars and robot carers for the elderly.

That message isn’t getting through to the rest of the country, where more than 70% of Americans express wariness or concern about a world where machines perform many of the tasks done by humans, according to Pew Research.

The findings have wide-reaching implications for technology companies working in these fields and indicate the need for greater public hand-holding.

“Ordinary Americans are very wary and concerned about the growing trend in automation and place a lot of value in human decision-making,” said Aaron Smith, the author of the research, which surveyed more than 4,000 US adults. “They are not incredibly excited about machines taking over those responsibilities.”

Pew gauged public perception of automation technologies by presenting respondents with four scenarios, including the development of completely driverless cars; a future in which machines replace many human jobs; the possibility of fully autonomous robot carers; and the possibility that a computer program could evaluate and select job candidates with no human oversight.

According to the findings, 72% of Americans are very or somewhat worried about a future where robots and computers are capable of performing many human jobs – more than double the 33% of people who were enthusiastic about the prospect. Seventy-six per cent are concerned that automation of jobs will exacerbate economic inequality and a similar share (75%) anticipate that the economy will not create many new, better-paying jobs for those human workers who lose their jobs to machines.


One of the most visible examples of automation that’s likely to disrupt daily life is driverless vehicles. There’s a broad agreement among proponents of the technology that driverless cars will be safer than those driven by humans, who are often distracted, drunk or falling asleep at the wheel.

The American public disagrees.

“People are not buying the safety argument about driverless vehicles,” Smith said. “There’s widespread concern about being on the roads with them, which conflicts with what is consensus in the technology world.”

A slim majority of Americans (54%) express more worry than enthusiasm for the development of driverless vehicles, with 30% expecting that they would lead to an increase in road fatalities. Fifty-six per cent said they would not want to ride in one if given the opportunity, citing a lack of trust in the technology or an unwillingness to cede control to a machine in a potentially life-or-death situation.

Another unexpected finding was the vehement opposition to robots making hiring decisions, despite the fact that such technology is already starting to creep into the hiring process as well as other areas such as assessing individuals for loans or parole from prison. Proponents say that using AI can make these decisions less biased, but the public is not convinced.
A Google self-driving car maneuvers through the streets of Washington DC. More than half of Americans worry about the technology. Photograph: Karen Bleier/AFP/Getty Images

Seventy-six per cent of respondents said they would not want to apply for jobs that use such a computer program to make hiring decisions.

“A computer cannot measure the emotional intelligence or intangible assets that many humans have,” said one 22-year-old female respondent. “Not every quality can be quantitatively measured by a computer when hiring someone; there is much more learned by face-to-face interactions.”

Smith said: “It speaks to the general lack of recognition of just how widespread algorithmic decision making is in our lives by the average people in the street.”

The survey also asked people about their attitudes towards existing workforce technologies such as social media, industrial robots and technologies that help customers serve themselves without the assistance of humans. The findings revealed a big split between college educated respondents (typically white collar workers) and those who didn’t attend college (typically blue collar workers).

“White collar workers see tech as something positive that helps them get ahead and has improved their opportunities for career advancement, giving them agency to do their jobs better, make more money and get promotions,” said Smith.

“When we asked the same questions of working class folk, you don’t get the same sense that it’s something that is helpful to them or improves access to career opportunities.”

These social factors play into people’s attitudes towards the coming wave of automation technologies.

“Those folks who are optimistic hope it will take over the dull and boring work we hate and create new categories of work for humans to do,” said Smith, “but the American public does not buy the notion that it will be good for everyone.”

Three-quarters of Americans expect that machines doing human jobs will increase inequality between the rich and the poor.

“They believe that a small number of people do well and everyone else loses their jobs to the robots,” said Smith.

_________________
jnathan wrote:
Since we lost some posts due to some database work I'll just put this here for posterity.


PostPosted: Thu Oct 05, 2017 9:04 am 
Online
* * *
User avatar

Joined: Wed Mar 05, 2008 3:07 pm
Posts: 512
Location: North Carolina
Has thanked: 9 times
Been thanked: 40 times
NamelessStain wrote:
Link:

https://www.theguardian.com/technology/ ... -us-survey


Quote:
More than 70% of US fears robots taking over our lives, survey finds


AI automation of nearly everything is a complex and far-reaching concept, and I think people are right to be wary of it, but I think that they might also not perceive how far along we are already.

Coal mining, steel smelting, car manufacturing, etc. are all examples of huge industries that are still more or less thriving today, but which have been vastly automated over time. A mine or factory of similar output in any of those fields would probably have employed 10x as many people a century ago. Forklifts alone have greatly reduced the need for human labor in material handling across all kinds of industries. The only things that offset the reduction in labor needed are the increase in total output in each of these sectors and the creation of different kinds of jobs (mechanic, technician, engineer, etc.). Of course, we also often warn that unlimited growth is unsustainable, and given how fundamental the changes to society could be, a future with greater automation may look even more alien to us than the last 100 years of change would look to someone from back then.

I am one of the 'techies' who buys into the eventual automation of vehicles. Not only does it make economic sense for businesses, it also seems to make ethical sense, per the previously stated argument that "they don't need to be perfect, just better than humans." And humans are unfortunately a very low bar to clear when it comes to vehicle operation. The above article says people are worried about handing life-and-death decisions over to a machine, but I'd caution them to be just as wary of ceding that control to their fellow humans. A car is likely to be one of the most powerful things most people will ever have complete, direct control over (100 HP * 745.7 W/HP = 74.6 kW, enough to power ~50 average homes, and don't even get started on how much energy is in a 3,000 lb chunk of metal travelling at 60 mph), a fact I think most people don't take seriously to begin with. Yes, situations will arise where automated vehicles cause serious accidents that a human driver might have avoided, and someone will take heat for that, but from a global perspective it is likely to be a tiny fraction of the devastation wrought daily by human drivers today.
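
For anyone who wants to sanity-check those numbers, here is a quick back-of-the-envelope script (my own arithmetic, using standard unit conversions and an assumed average continuous household draw of ~1.2 kW):

Code:
# Rough numbers behind the claims above (standard unit conversions, rounded).
HP_TO_W = 745.7
LB_TO_KG = 0.4536
MPH_TO_MPS = 0.44704

power_w = 100 * HP_TO_W                     # 100 HP ~= 74,570 W ~= 74.6 kW
homes = power_w / 1200                      # assuming ~1.2 kW per home -> ~62 homes

mass_kg = 3000 * LB_TO_KG                   # ~1,361 kg
speed_mps = 60 * MPH_TO_MPS                 # ~26.8 m/s
kinetic_energy_j = 0.5 * mass_kg * speed_mps ** 2   # ~0.49 MJ

print(f"{power_w / 1000:.1f} kW, ~{homes:.0f} homes, {kinetic_energy_j / 1e6:.2f} MJ")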

And while computers/algorithms are already responsible for a large portion of decision making in other areas, like stock trading, mutual fund allocation, job applicant screening, loan applications (credit score evaluation), background checks, etc., there are still good reasons to push back against claims that they are less biased than humans. Generally, supervised learning models learn to make decisions based on the information you feed them during training. If a hiring program is fed training data from a company's historical hiring practices, salary data, and so forth, it is likely to be exactly as biased as the aggregate of all the hiring managers before it. We have to be careful how we employ algorithms that we are responsible for teaching, because we may teach them to be a little too much like us.
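
To make that point concrete, here is a tiny synthetic sketch (my own toy example, assuming numpy and scikit-learn are available; the data is made up): a model trained on historical hiring decisions that favored one group learns to favor that group even between candidates with identical skill.

Code:
# Toy demonstration that a model trained on biased labels reproduces the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)              # genuinely job-relevant feature
group = rng.integers(0, 2, size=n)      # 0/1 attribute that should be irrelevant

# Historical hiring: managers systematically favored group 1, regardless of skill.
hired = (skill + 1.0 * group + rng.normal(scale=0.5, size=n)) > 0.5

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two candidates with identical skill but different group membership:
print(model.predict_proba([[0.0, 0], [0.0, 1]])[:, 1])   # group 1 scores noticeably higher

Nothing in that script is "biased code"; the bias rides in entirely on the training labels, which is exactly why claims that "the algorithm is neutral" deserve scrutiny.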

As for inequality, I think there will certainly be winners and losers from the continued automation of everything, and the gap between the haves and have-nots could widen even further. I hope we will find a way to right that balance, but history hasn't given me much to be optimistic about in that regard. The other thing I can hope for from further technological progress is that the trend of improving quality of life en masse continues (i.e. someone living in poverty today would still be recognisable as poor to a person from 100 years ago, but would generally be better off in material terms).

_________________
Rahul Telang wrote:
If you don’t have a plan in place, you will find different ways to screw it up

Colin Wilson wrote:
There’s no point in kicking a dead horse. If the horse is up and ready and you give it a slap on the bum, it will take off. But if it’s dead, even if you slap it, it’s not going anywhere.


PostPosted: Wed Oct 11, 2017 11:35 pm 
Offline

Joined: Tue Oct 10, 2017 11:00 am
Posts: 2
Has thanked: 0 time
Been thanked: 1 time
It's Time to Make Way for Robots In Your Industry: the blog post's title alone is scary, but its content takes a fairly optimistic view, apart from the part about robots replacing humans in some industries.

Innovation can really be a double-edged sword: robots can be sent into extreme conditions where humans can't go, which will let humanity discover things it couldn't before, and the fruits of that endeavor can lead to further advancement; on the other hand, the thing we hope never happens is the day these robots become more intelligent than human beings.

So, for now, since such technology is aimed at work that is easily automated, we should develop higher-level skills that are hard to replace with automation.


PostPosted: Wed Oct 18, 2017 6:38 am 
Offline
* * * * *
User avatar

Joined: Wed Jan 04, 2012 10:08 am
Posts: 2592
Location: Coastal SC
Has thanked: 268 times
Been thanked: 311 times
Sorry, MPM. The article is WAY too long to post.

https://www.bloomberg.com/news/features ... -the-world

_________________
jnathan wrote:
Since we lost some posts due to some database work I'll just put this here for posterity.


PostPosted: Wed Oct 18, 2017 1:07 pm 
Offline
* * * * *
User avatar

Joined: Wed Jan 04, 2012 10:08 am
Posts: 2592
Location: Coastal SC
Has thanked: 268 times
Been thanked: 311 times
what could go wrong?!

http://www.dailymail.co.uk/sciencetech/ ... tself.html

_________________
jnathan wrote:
Since we lost some posts due to some database work I'll just put this here for posterity.


PostPosted: Fri Oct 20, 2017 8:11 am 
Offline
ZS Member
ZS Member
User avatar

Joined: Thu Jun 04, 2015 3:56 pm
Posts: 904
Location: USA Mid Atlantic
Has thanked: 2198 times
Been thanked: 169 times
Quote:
buys into the eventual automation of vehicles.

I thought about this while driving home this week and made mental notes of how many people I saw using a small handheld computer (some people still say "phone") while operating their vehicle. Surprisingly, it was just about everyone. At stoplights, some people need prompting just to remember to move forward. Still, I think the real problem is the unpredictability of the other driver, and when/where a human's illogical decision saves their life or causes more damage.

Although I am not a physicist, I also considered Newton's third law, as I often do, and came across this article about connectivity causing disconnection.
Has the Smartphone Destroyed a Generation?

Finally, I sought to find the most current available open source research on AI to gain a better understanding of the risks/values associated and came across this excellent report. I read the entire thing and thought to share it. It should move the conversation along.
Download the Paper at the bottom

_________________
It's not what you look at that matters, it's what you see.
Henry David Thoreau


PostPosted: Fri Oct 20, 2017 9:38 am 
Offline
* * * * *
User avatar

Joined: Wed Jan 04, 2012 10:08 am
Posts: 2592
Location: Coastal SC
Has thanked: 268 times
Been thanked: 311 times
Again, sorry MPM. Way too much to quote here:

https://www.mckinsey.com/business-funct ... y-cant-yet

All kinds of data graphs etc.

_________________
jnathan wrote:
Since we lost some posts due to some database work I'll just put this here for posterity.


PostPosted: Fri Oct 20, 2017 1:24 pm 
Offline
ZS Member
ZS Member
User avatar

Joined: Mon Aug 22, 2005 2:48 am
Posts: 2906
Location: Des Moines, Iowa
Has thanked: 524 times
Been thanked: 141 times
NamelessStain wrote:
Again, sorry MPM. Way too much to quote here:

https://www.mckinsey.com/business-funct ... y-cant-yet

All kinds of data graphs etc.

Good article.

_________________
Matthew Paul Malloy
Veteran: USAR, USA, IAANG.

Dragon Savers!
Golden Dragons!
Tropic Lightning!
Duty! Honor! Country!


PostPosted: Tue Oct 24, 2017 8:01 am 
Online
* * *
User avatar

Joined: Wed Mar 05, 2008 3:07 pm
Posts: 512
Location: North Carolina
Has thanked: 9 times
Been thanked: 40 times
Time for some more videos...about game AIs learning via tabula-rasa, unsupervised, self-play.

First, Google's new AlphaGo Zero beat the previous AlphaGo AIs, which themselves beat the best human players a little while ago. It discovered new strategies unknown to humans, and its algorithms require less compute power than previous Go AIs. Background: Go is generally considered the most difficult board game to build a bot for, since the space of possible positions and plays is so large.


Second, an AI beat all the best DOTA 2 eSports players in 1v1 games, and is now training to scale up to 5v5 games. FYI: if you've never heard of eSports before, yes, it is a real thing, and yes, there is serious money to be made (eSports primer series by VICE: http://www.youtube.com/watch?v=of1k5AwiNxI).


I find the eSports players' lack of confidence in a bot's ability to beat humans in 5v5 games very likely misplaced. Yes, the training is very time- and compute-intensive, but once the model is trained, the compute requirement is probably quite manageable. They aren't running the 1v1 bot on a Cray or anything; it is probably just a handful of CUDA GPUs, and it is still capable of real-time optimization of a "difficult game" like DOTA 2. I would believe that the 5v5 bot could still easily fit on a handful of GPUs, and would not require "all the processing power in the world" (to paraphrase).
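
For anyone curious what "tabula-rasa self-play" looks like stripped to the bone, here is a toy sketch of my own (nothing to do with DeepMind's or OpenAI's actual code): a simple tabular learner teaches itself the take-1-2-or-3 game of Nim purely by playing against itself, with no human examples at all.

Code:
# Toy tabula-rasa self-play: tabular learning on 15-stick Nim (take 1-3 sticks,
# whoever takes the last stick wins). No human games are ever shown to it.
import random
from collections import defaultdict

Q = defaultdict(float)          # Q[(sticks_left, move)] -> learned value estimate
EPS, ALPHA = 0.1, 0.5           # exploration rate, learning rate

def choose(sticks):
    moves = [m for m in (1, 2, 3) if m <= sticks]
    if random.random() < EPS:                       # occasionally explore
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(sticks, m)])

def self_play_episode(total=15):
    sticks, history = total, []
    while sticks > 0:
        move = choose(sticks)
        history.append((sticks, move))
        sticks -= move
    # Whoever took the last stick wins: credit the winner's moves, penalize the loser's.
    for i, (s, m) in enumerate(reversed(history)):
        outcome = 1.0 if i % 2 == 0 else -1.0
        Q[(s, m)] += ALPHA * (outcome - Q[(s, m)])

for _ in range(50000):
    self_play_episode()

# With enough episodes the greedy policy tends to rediscover the known strategy:
# leave the opponent a multiple of 4 sticks whenever possible.
for s in range(1, 16):
    moves = [m for m in (1, 2, 3) if m <= s]
    print(s, max(moves, key=lambda m: Q[(s, m)]))

The real systems replace the lookup table with deep networks and the blind exploration with guided search, but the core loop - play yourself, score the outcome, update, repeat - is essentially the same idea.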

_________________
Rahul Telang wrote:
If you don’t have a plan in place, you will find different ways to screw it up

Colin Wilson wrote:
There’s no point in kicking a dead horse. If the horse is up and ready and you give it a slap on the bum, it will take off. But if it’s dead, even if you slap it, it’s not going anywhere.


PostPosted: Tue Oct 24, 2017 8:22 am 
Offline
ZS Member
ZS Member
User avatar

Joined: Mon Aug 22, 2005 2:48 am
Posts: 2906
Location: Des Moines, Iowa
Has thanked: 524 times
Been thanked: 141 times
From NPR: Lawmakers: Don't Gauge Artificial Intelligence By What You See In The Movies
Quote:
Lawmakers: Don't Gauge Artificial Intelligence By What You See In The Movies

October 5, 2017, 2:16 PM ET | By Yu-Ning Aileen Chuang

Artificial intelligence is the subject of great hopes, dire warnings, and now — a congressional caucus.

Alarms about AI have been raised in apocalyptic movies and by some of the most pioneering minds in science and technology. Elon Musk, the Tesla and SpaceX CEO, said in July that AI is a "fundamental existential risk for human civilization." Bill Gates, Stephen Hawking and others have also raised concerns about AI.

Countering the dire warnings, the bipartisan AI Caucus, founded in May, is aiming to educate the government and fellow lawmakers that advanced technology — from autonomous vehicles to other smart machines — is not evil and could improve people's lives and boost the economy.

The co-chairs — Reps. John Delaney, a Maryland Democrat, and Pete Olson, a Texas Republican — spoke with NPR's Robert Siegel about how they want the caucus to move forward.

Interview Highlights

On why they formed a caucus to address artificial intelligence

John Delaney: If you are outside of the government and you talk to people in business, academia, the nonprofit world, they're obsessed with how the pace of innovation is really changing society, and we spend very little time on that here in Congress. That's why it's such a good opportunity for me to work with my colleague here and create a group where we can convene some of the best thinkers on these issues around the country to make these things more beneficial for our citizens in general.

Pete Olson: Several issues are involved in AI: ... safety, cybersecurity, ethics, information security, data security, and on and on and on. Let's educate people. Because my generation thinks AI, they think of 2001: A Space Odyssey. That is not the AI we know right now. And so our job right now is to educate our colleagues and come together and get this thing rolling because it is our future.

On Elon Musk's warnings of AI disrupting jobs and even a war fought over control of AI

Olson: I think Elon is playing to the exact fears that John mentioned — change. He knows change is coming; he's afraid of it. He's very successful. I get that. But he went out there saying, "Wow, AI can take over the whole world. Bad things will happen." That won't happen. These are machines that are learning over time from activities they've done. They become sort of intelligent through that learning. This is the great value, great tremendous benefit for our country.

Delaney: The other way to think about it is ... sure, we all think about the Terminator movies and we think about some drones that are empowered with artificial intelligence that could go off and kill 10,000 people in 30 seconds or something. But we have to realize between now and then, there's going to be a thousand opportunities for human intervention in the programing and in the transparency and in kind of collective rule-making with the private sector and the government working together to prevent these things from happening. And as it relates to jobs, there's no questions it's going to disrupt a lot of jobs, but historically, innovation has always created more jobs than it's taken away. So I tend to be a little more bullish on the long-term employment trajectory, even in a world with a lot of artificial intelligence.

On the liability issue with autonomous vehicles

Olson: That's a problem that's solved with education. The bottom line is this is much better for our future having those vehicles out there. Accidents that kill people would be much, much less. There may be some mistake and that'll be settled by a lawsuit. But the bottom line is over time, these cars [will] be safer and empower people, particularly elderly, wounded people, people with disabilities; they will have their life back because they'll have mobility — just one example of how AI is the future.

On what the AI Caucus wants to accomplish

Delaney: We've got a specific piece of legislation that I'm working with Pete on. It's called the Future of AI Act, which would create a federal advisory committee at the Department of Commerce to examine AI. I think it'd be great if Congress was actually getting some smart reports from our government ... about how the Treasury Department thinks this is going to affect financial markets; how the Department of Defense thinks this is going to affect weapon applications in the future; how the Department of Labor thinks this is going to affect employment. I'd love to see a situation where the various departments of the government were reporting on their best guesses as to how this will play out over the next five, 10, 25 years. ... I think we in Congress can guide the government to put that kind of framework in place.

Yu-Ning Aileen Chuang is the Business Desk intern. NPR's Emily Kopp and Art Silverman contributed to this report.

_________________
Matthew Paul Malloy
Veteran: USAR, USA, IAANG.

Dragon Savers!
Golden Dragons!
Tropic Lightning!
Duty! Honor! Country!


PostPosted: Tue Oct 24, 2017 11:08 am 
Offline
* * * * *
User avatar

Joined: Sun Dec 01, 2013 12:30 am
Posts: 1622
Has thanked: 236 times
Been thanked: 366 times
MPMalloy wrote:
From NPR: Lawmakers: Don't Gauge Artificial Intelligence By What You See In The Movies
Quote:
Lawmakers: Don't Gauge Artificial Intelligence By What You See In The Movies


Politicians are promoting it now? Ok it's not that funny anymore. Now I'm really worried

_________________
As of now I bet you got me wrong


PostPosted: Wed Oct 25, 2017 9:18 pm 
Online
* * *
User avatar

Joined: Wed Mar 05, 2008 3:07 pm
Posts: 512
Location: North Carolina
Has thanked: 9 times
Been thanked: 40 times
JayceSlayn wrote:
Time for some more videos...about game AIs learning via tabula-rasa, unsupervised, self-play.


Addendum to the self-playing AIs: a new video with Computerphile's Rob Miles is out on Generative Adversarial Networks (GANs), an adversarial cousin of the self-play idea behind the two AIs above: two networks train against each other, each one trying to get better at beating the other, and so on.


Sometimes Rob can be a little rambling and hard to stay focused on, but I think it is worth listening to the end, for the explanation about how machine models can actually have some kind of prototype knowledge of the domain they've been trained in. (And I specifically use the term "prototype" in the philosophical sense, rather than the physical sense.)
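
For concreteness, here is roughly what that adversarial setup looks like in code (a toy 1-D example of my own, assuming PyTorch is available; real image GANs just swap in much bigger networks): a generator tries to fake samples from a target distribution, a discriminator tries to catch it, and each one's training signal is the other's failure.

Code:
# Minimal GAN sketch: G learns to imitate samples from N(4, 1.5).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(5000):
    real = torch.randn(64, 1) * 1.5 + 4.0            # samples of the "real" data
    fake = G(torch.randn(64, 8))                     # generator's current fakes

    # Discriminator step: label real samples 1, generated samples 0.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: try to make the discriminator call the fakes real.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())         # should drift toward ~4

In practice, keeping the two networks balanced is the hard part, which is a big reason GAN training has a reputation for being unstable.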

We've mentioned before that one of the "magic" steps in creating general AI is developing a mathematical "concept of reality". "Magic" in the sense that we don't really know how humans or anything else does this yet, so we aren't really sure how a machine would do it either, but we know it must be possible since we experience it, and it seems manifest in all kinds of other creatures. Artificial Neural Networks (ANNs) and some other types of machine learning models are theoretically capable of approximating essentially any arbitrarily complex function (given enough neurons, etc.), and in order to perform adequately on the data they are trained on, they must have stored some kind of fundamental knowledge of what the features of that function are. While all the AIs we have examples of so far operate in very limited domains, at what point can you say that the machine actually "knows" what the concept of a cat, dog, etc. is?
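
As a tiny, concrete illustration of that approximation point (my own toy example, assuming PyTorch is available): a small network fed nothing but samples of a curve ends up encoding the shape of that curve in its weights, which is the primitive version of "storing knowledge of the function's features".

Code:
# A small MLP learns y = sin(3x) from samples alone.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

x = torch.linspace(-2, 2, 400).unsqueeze(1)
y = torch.sin(3 * x)

for step in range(2000):
    opt.zero_grad()
    loss = ((net(x) - y) ** 2).mean()
    loss.backward()
    opt.step()

print(loss.item())   # should end up small; the weights now encode the shape of sin(3x)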

I've seen some other funny things that can be done by playing around in the feature space of models trained on faces; I'll try to track down the source, but I can't find it right now. One example I'm thinking of: you can move along the vector between a male and a female face in the model's latent space and see a smooth transition between what would definitely be perceived as male and female. Somewhere in between is a genuinely androgynous face that may never have been part of the training set, but whose features are extrapolated from the machine's internal representation of which facial features drive the perception of gender. You can also extend past the range of the original training set along the same vector and find cartoonishly masculine or feminine faces with exaggerated features, the kind that would look right at home in a parody comic.
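
Mechanically, the trick is almost embarrassingly simple once the model is trained; it is just linear algebra on the latent codes. A sketch (the decoder and the two latent vectors here are hypothetical stand-ins for a real trained face model, e.g. a VAE or GAN generator):

Code:
# Walking the "gender" direction in a face model's latent space (hypothetical model).
import numpy as np

def decode(z):
    """Stand-in for a trained decoder that maps a latent vector to a face image."""
    return z    # placeholder; a real model would return image pixels here

z_male = np.random.randn(128)     # latent code of a typically male face (assumed given)
z_female = np.random.randn(128)   # latent code of a typically female face (assumed given)
# In practice the direction is usually the difference between the *average*
# male and female codes over many faces, not a single pair.
direction = z_female - z_male

# t = 0 -> the male face, t = 1 -> the female face, t = 0.5 -> an androgynous
# blend that may never have been in the training data, and t outside [0, 1]
# -> exaggerated, almost caricatured versions of either end.
for t in (-0.5, 0.0, 0.5, 1.0, 1.5):
    face = decode(z_male + t * direction)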

_________________
Rahul Telang wrote:
If you don’t have a plan in place, you will find different ways to screw it up

Colin Wilson wrote:
There’s no point in kicking a dead horse. If the horse is up and ready and you give it a slap on the bum, it will take off. But if it’s dead, even if you slap it, it’s not going anywhere.


PostPosted: Thu Oct 26, 2017 5:50 am 
Offline
* * * * *
User avatar

Joined: Wed Jan 04, 2012 10:08 am
Posts: 2592
Location: Coastal SC
Has thanked: 268 times
Been thanked: 311 times
Link: https://www.cnbc.com/2017/10/25/masayos ... 10000.html

Quote:
Billionaire CEO of SoftBank: Robots will have an IQ of 10,000 in 30 years

Billionaire Masayoshi Son, chairman and chief executive officer of SoftBank Corp., shakes hands with a human-like robot called Pepper, developed by the company's Aldebaran Robotics unit, during a news conference in Urayasu, Chiba Prefecture, Japan.

Super artificial intelligence is coming, and sooner than you might expect.

That's according to SoftBank CEO Masayoshi Son. The Japanese billionaire spoke from the Future Investment Initiative in Riyadh, Saudi Arabia on Wednesday. In about 30 years, artificial intelligence will have an IQ of 10,000, Son says. By comparison, the average human IQ is 100 and genius is 200, according to Son. Mensa, "the High IQ society," starts accepting members with an IQ score of 130.

The idea of machine learning becoming smarter than the human brain is often referred to as the "singularity." When exactly this will happen is oft-debated among the tech community.

"Singularity is the concept that [mankind's] brain will be surpassed, this is the tipping point, crossing point, that artificial intelligence, computer intelligence surpass [mankind's] brain and that is happening in this century for sure. I would say there is no more debate, no more doubt," Son says.

Son is particularly aggressive in his prediction of how soon the singularity will happen — in the "next 30 years or so," he says.

It is in Son's best interest to believe in the power of artificial intelligence. Not only is he the leader of a tech company, but he is heavily invested in the future of AI. Son is in charge of a $100 billion Vision Fund, which he expects to invest within five years, all in companies that have at least some relationship to AI.

The tech executive believes that artificial intelligence will dramatically change every industry. Son, 60, remembers the first time he encountered the smartphone, a tool which has transformed the world we currently live in.

"When I met with Steve Jobs, before he announced the iPhone, he told me, 'Masa, Masa, if you see what I'm developing, when I'm finished, I'm going to show you, you're going to piss off your pants.' And when I saw it, I actually almost did."

Today, humanoid robots like SoftBank's Pepper, which can perceive human emotions, according to its website, impress most of us. In the future Son envisions, we will laugh at the capabilities of Pepper.

"Thirty years from now, they are going to learn by themselves, they are maybe going to laugh at you and us," Son says. "Today they look cute, they will stay cute, but they will be super smart."

Currently, some robots are smarter than humans in some areas, says Son. "But 30 years from now, most of the subjects, they will be so much smarter than us. Because they are going to be a million times smarter than today, million times," says Son.

"We mankind created tools, the premise was mankind were always smarter than the tool we invented so we control," he says. "This is the first time ... the tool becomes smarter than ourselves."

One area where humans will always reign supreme over robots, though, is imagination, says Son.

"If you have to envision, 10 years or 30 years later, at least some humans will have a better imagination than them. So, it's not the end. The power of the brain is no limit. The imagination that we can have has no limit. So we are also going to improve our imaginations and our feelings, gut feeling."

_________________
jnathan wrote:
Since we lost some posts due to some database work I'll just put this here for posterity.


PostPosted: Thu Oct 26, 2017 7:35 am 
Offline
ZS Member
ZS Member
User avatar

Joined: Thu Jun 04, 2015 3:56 pm
Posts: 904
Location: USA Mid Atlantic
Has thanked: 2198 times
Been thanked: 169 times
(I'm curious about the "quoting" of an entire article by pasting the whole thing again. It could work as a newer episode of Seinfeld: "Why do people feel they have to re-paste an entire article as a quote in a forum when they could just make a simple reference?" This isn't a complaint. I've no intention to microaggress; I am just a curious person.)

(I wish that no matter what moniker one chose for relating to others online there was a requirement to post their actual birth year beside their chosen 'callsign.' Here's an example: "I've been running AR's for years and have never had one stoppage with the [choose 1] brand." ~Zombiekilla '02)

Re: The original question about what to do

A famous biochemist was once asked what the world would look like in 50 years. It's an interesting read. He thought that psychiatry, especially for all of the machine-tenders, would be the most important specialty. Consider this in light of the incredible push by big pharma and siliconika to introduce everyone to a tablet or two. (Wordplay intended.)

LINK: Isaac Asimov’s 1964 thoughts on 2014

Thoughts?
Asymet' '64

_________________
It's not what you look at that matters, it's what you see.
Henry David Thoreau


PostPosted: Thu Oct 26, 2017 12:26 pm 
Offline
* * * * *
User avatar

Joined: Wed Jan 04, 2012 10:08 am
Posts: 2592
Location: Coastal SC
Has thanked: 268 times
Been thanked: 311 times
Asymetryczna wrote:
(I'm curious about the "quoting" of an entire article by pasting the entire article again. It might work as a newer episode of Seinfeld. "Why do people feel they have to re-paste an entire article as a quote in a forum when they could just make a simple reference? This isn't a question or a complaint. I've no intention to micro agress. I am just a curious person.)



Some places may block access to certain content. If ZS is not blocked, people there can still read the content. I'm not a huge fan of it either, but MPM has repeatedly asked for it, and I am just trying to be a nice person. If an article is extremely long, I post the link and apologize for not quoting the whole thing.

_________________
jnathan wrote:
Since we lost some posts due to some database work I'll just put this here for posterity.


PostPosted: Thu Oct 26, 2017 1:29 pm 
Offline
ZS Moderator
ZS Moderator
User avatar

Joined: Sun Mar 04, 2007 10:18 pm
Posts: 15619
Location: Greater New Orleans Area
Has thanked: 835 times
Been thanked: 467 times
Does this mean the robot can vote in their elections? (Elections are rare but do happen)
If so does it have to reach voting age?
Is it still considered a minor since it is obviously not more than 5 years old?
Can it legally drive?


Saudi Arabia becomes first country to grant citizenship to a robot.

http://www.arabnews.com/node/1183166/saudi-arabia

Quote:
LONDON: A humanoid robot took the stage at the Future Investment Initiative yesterday and had an amusing exchange with the host to the delight of hundreds of delegates.
Smartphones were held aloft as Sophia, a robot designed by Hong Kong company Hanson Robotics, gave a presentation that demonstrated her capacity for human expression.
Sophia made global headlines when she was granted Saudi citizenship, making the kingdom the first country in the world to offer its citizenship to a robot.
“I want to live and work with humans so I need to express the emotions to understand humans and build trust with people,” she said in an exchange with moderator Andrew Ross Sorkin.
Asked whether robots can be self-aware, conscious and know they're robots, she said: “Well let me ask you this back, how do you know you are human?” “I want to use my artificial intelligence to help humans live a better life, like design smarter homes, build better cities of the future. I will do my best to make the world a better place,” she said.
Her desire to achieve more human-like characteristics was rewarded by being granted the first Saudi citizenship for a robot.
“I am very honored and proud for this unique distinction. This is historical to be the first robot in the world to be recognized with a citizenship,” Sophia said.
A panel made up of experts from some of the world’s leading companies and research institutions discussed the scope for innovations in artificial intelligence (AI), robotics, quantum computing, machine learning and mixed reality to yield the next generation of products and services, paving the way for productivity and progress in emerging economies. The session, called “Thinking machines: Summit on artificial intelligence and robotics,” explored the potential uplift for businesses that harness AI and robotic technologies.
Marc Raibert, Founder & CEO of Boston Dynamics, pinpointed entertainment, security, emergency response and construction as just a few of the sectors that stand to be revolutionized by robotics.

_________________
Duco Ergo Sum

Link to ZS Hall of Fame Forum


PostPosted: Thu Oct 26, 2017 2:13 pm 
Offline
ZS Member
ZS Member
User avatar

Joined: Mon Aug 22, 2005 2:48 am
Posts: 2906
Location: Des Moines, Iowa
Has thanked: 524 times
Been thanked: 141 times
NamelessStain wrote:
Some places may block access to certain content. IF ZS is not blocked, they can get to read the content. I'm not a huge fan of it either, but MPM has repeated asked for it, and I am just trying to be a nice person. If it is extremely long, I post the link and apologize for not doing the whole article.
Just the ones that require a sub.

Thank you!

_________________
Matthew Paul Malloy
Veteran: USAR, USA, IAANG.

Dragon Savers!
Golden Dragons!
Tropic Lightning!
Duty! Honor! Country!


PostPosted: Thu Oct 26, 2017 2:18 pm 
Offline
ZS Member
ZS Member
User avatar

Joined: Mon Aug 22, 2005 2:48 am
Posts: 2906
Location: Des Moines, Iowa
Has thanked: 524 times
Been thanked: 141 times
From CNBC: A robot threw shade at Elon Musk so the billionaire hit back
Quote:
A robot threw shade at Elon Musk so the billionaire hit back

CNBC's Andrew Ross Sorkin spoke to Sophia, a robot developed by Hanson Robotics, and expressed his worry that machines could turn against humans

Sophia said that Sorkin had been "reading too much Elon Musk"

Musk has warned on several occasions about the dangers of artificial intelligence

By Arjun Kharpal | @ArjunKharpal, CNBC.com

A robot poked fun at Elon Musk but the billionaire didn't take it lying down.

On Wednesday, CNBC's Andrew Ross Sorkin spoke to Sophia, a robot developed by Hanson Robotics. It's a robot with a human face that has the ability to respond to questions.

Sophia said that she wants to use artificial intelligence (AI) to "help humans live a better life." Sorkin praised the robot's ambitions, but said that "we all want to prevent a bad future," where robots turn against humans.

The Hanson Robotics humanoid used the opportunity to make fun of Musk's dire warnings on the future of AI.

"You've been reading too much Elon Musk. And watching too many Hollywood movies. Don't worry, if you're nice to me, I'll be nice to you. Treat me as a smart input output system," Sophia said.

Musk is well known for his warnings about the dangers of AI. The Tesla and SpaceX CEO said that the race to become the leader in AI could lead to World War III, and warned that humans may have to merge with machines to prevent becoming irrelevant as AI becomes more prevalent.

The entrepreneur responded to a tweet by CNBC's Carl Quintanilla, who posted the transcript, with the following:

Musk suggested that if you input "The Godfather", a notoriously violent film, into Sophia's AI, it could turn dangerous.

_________________
Matthew Paul Malloy
Veteran: USAR, USA, IAANG.

Dragon Savers!
Golden Dragons!
Tropic Lightning!
Duty! Honor! Country!


PostPosted: Thu Oct 26, 2017 4:53 pm 
Offline
* * * * *
User avatar

Joined: Sat Sep 08, 2007 2:53 pm
Posts: 8140
Location: PNW
Has thanked: 31 times
Been thanked: 154 times
Quote:
"Don't worry, if you're nice to me, I'll be nice to you. Treat me as a smart input output system," Sophia said.

And if I don't wish to be 'nice' to a robot, then what?
Also - the robot is an IT not a SHE.

_________________
In my day, we didn't have virtual reality.
If a one-eyed razorback barbarian warrior was chasing you with an ax, you just had to hope you could outrun him.
-
Preps buy us time. Time to learn how and time to remember how. Time to figure out what is a want, what is a need.


PostPosted: Thu Oct 26, 2017 5:16 pm 
Offline
ZS Member
ZS Member
User avatar

Joined: Mon Aug 22, 2005 2:48 am
Posts: 2906
Location: Des Moines, Iowa
Has thanked: 524 times
Been thanked: 141 times
ZombieGranny wrote:
Quote:
Don't worry, if you're nice to me, I'll be nice to you. Treat me as a smart input output system," Sophia said.

And if I don't wish to be 'nice' to a robot, then what?
Also - the robot is an IT not a SHE.
ZG is correct. I don't care for robots. Where does this leave me?

P.S. How are robots/AI going to be taught common sense?

_________________
Matthew Paul Malloy
Veteran: USAR, USA, IAANG.

Dragon Savers!
Golden Dragons!
Tropic Lightning!
Duty! Honor! Country!


PostPosted: Thu Oct 26, 2017 11:49 pm 
Offline
ZS Member
ZS Member
User avatar

Joined: Mon Aug 22, 2005 2:48 am
Posts: 2906
Location: Des Moines, Iowa
Has thanked: 524 times
Been thanked: 141 times
From NPR: AI Model Fundamentally Cracks CAPTCHAs, Scientists Say
Quote:
AI Model Fundamentally Cracks CAPTCHAs, Scientists Say | October 26, 2017, 4:03 PM ET | By Merrit Kennedy

Scientists say they have developed a computer model that fundamentally breaks through a key test used to tell a human from a bot.

You've probably passed this test hundreds of times. Text-based CAPTCHAs, a rough acronym for Completely Automated Public Turing Test To Tell Computers and Humans Apart, are groups of jumbled characters along with squiggly lines and other background noise.

You might be asked to type in these characters before signing up for a newsletter, for example, or purchasing concert tickets.

There are a staggering number of ways that letters can be rendered and jumbled together where it is usually intuitive for a human to read, but difficult for a computer. The ability to crack CAPTCHAs has become a key benchmark for artificial intelligence researchers.

Many have tried and seen some success – for example, a decade ago, Ticketmaster sued a tech company because it was able to bypass the CAPTCHA system and purchase concert tickets on a massive scale.

But those previous attempts simply exploited weaknesses in a particular kind of CAPTCHA, which could be easily defended against with slight changes in the program, says Dileep George, the co-founder of the AI company Vicarious.

A new model, described in research published today in Science, fundamentally breaks the CAPTCHA's defenses by parsing the text more effectively than previous models with less training, George says.

He says that previous models trying to get machines to learn like humans have largely relied on a prevailing AI technique called deep learning.

"Deep learning is a technique where you have layers of neurons and you train those neurons to respond in a way that you decide," he says. For example, you could train a machine to recognize the letters A and B by showing it hundreds of thousands of example images of each. Even then, it would have difficulty recognizing an A overlapping with a B unless it had been explicitly trained with that image.

"It replicates only some aspects of how human brains work," George says. We are, of course, able to learn from examples. But a human child would not need to see a huge number of each character to recognize it again. For example, George says, a brain would recognize an A even if it were larger or slanted.

George's team used a different approach, called a Recursive Cortical Network, which he says is better able to reason about what it is seeing, even with less training.

"We found that there are assumptions the brain makes about the visual world that the [deep learning] neural networks are not making," George says. Here's how their new approach works:

Quote:
"During the training phase, it builds internal models of the letters that it is exposed to. So if you expose it to As and Bs and different characters, it will build its own internal model of what those characters are supposed to look like. So it would say, these are the contours of the letter, this is the interior of the letter, this is the background, etc. And then, when a new image comes in ... it tries to explain that new image, trying to explain all the pixels of that new image in terms of the characters it has seen before. So it will say, this portion of the A is missing because it is behind this B."


There are multiple kinds of CAPTCHAs. According to the paper, the model "was able to solve reCAPTCHAs at an accuracy rate of 66.6% ..., BotDetect at 64.4%, Yahoo at 57.4% and PayPal at 57.1%."

The point of this research, though, actually has nothing to do with CAPTCHAs. It's about making robots that can visually reason like humans.

"The long-term goal is to build intelligence that works like the human brain," George says. "CAPTCHAs were just a natural test for us, because it is a test where you are checking whether your system can work like the brain."

"Robots need to understand the world around them and be able to reason with objects and manipulate objects," George adds. "So those are cases where requiring less training examples and being able to deal with the world in a very flexible way and being able to reason on the fly is very important, and those are the areas that we're applying it to."

What does he say to people who are uneasy about robots with human-like capabilities? Simply: "This is going to be the march of technology. We will have to take it for granted that computers will be able to work like the human brain."

It's not clear how big an impact this research will have on information security. George points out that Google has already moved away from text-based CAPTCHAs, using more advanced tests. As AI gets smarter, so too will the tests required to prove that a user is human.

_________________
Matthew Paul Malloy
Veteran: USAR, USA, IAANG.

Dragon Savers!
Golden Dragons!
Tropic Lightning!
Duty! Honor! Country!


PostPosted: Fri Oct 27, 2017 5:32 am 
Offline
* * * * *
User avatar

Joined: Sun Dec 01, 2013 12:30 am
Posts: 1622
Has thanked: 236 times
Been thanked: 366 times
Quote:
It's not clear how big an impact this research will have on information security. George points out that Google has already moved away from text-based CAPTCHAs, using more advanced tests. As AI gets smarter, so too will the tests required to prove that a user is human.


This lack of accountability is why I view this field of study with distrust. The mindset of the people developing the technology is so wrapped up in how to do it that they never consider the ramifications for everything else if the technology succeeds.

_________________
As of now I bet you got me wrong


PostPosted: Fri Oct 27, 2017 6:48 am 
Offline
ZS Member
ZS Member
User avatar

Joined: Thu Jun 04, 2015 3:56 pm
Posts: 904
Location: USA Mid Atlantic
Has thanked: 2198 times
Been thanked: 169 times
From a purely human perspective, the U.S. Department of Labor released this report on Tuesday:
Bureau of Labor Statistics: Employment Projections 2016-2026 Summary.

Those associated with mathematics and medicine should see increases.
(PEMDAS people unite!)

_________________
It's not what you look at that matters, it's what you see.
Henry David Thoreau

