Artificial intelligence (AI) is rapidly advancing, and its potential impact on society is growing. From autonomous vehicles to intelligent personal assistants, AI is transforming many industries and changing the way we live our lives. But with this great power comes great responsibility, and there are many ethical questions that arise as we create intelligent machines and robots.
In this article, we explore the complex ethical implications of AI and robotics and the challenges that arise as these technologies continue to evolve. We examine the potential benefits and risks of AI, including issues of bias, transparency, privacy, and accountability. We also delve into the philosophical and moral questions that AI raises, such as the nature of consciousness, free will, and the value of human life.
Biased Data = Biased AI
One of the key ethical issues in AI is the potential for bias. AI systems are trained on data, and if that data is biased, the AI system will be biased as well. This can lead to discrimination and inequality, particularly in areas such as hiring, lending, and criminal justice. To address this issue, it is essential that AI systems are designed to be transparent and accountable, and that data sets are diverse and representative.
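To make that concrete, here is a toy sketch using synthetic data and hypothetical features (`skill` and a protected `group` attribute): a model trained on historically biased hiring labels reproduces the bias even for equally skilled candidates.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic, illustrative data -- not a real hiring dataset.
rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # protected attribute (0 or 1)
skill = rng.normal(0, 1, n)          # the feature that *should* decide hiring
# Historically biased labels: group 1 was hired less often at equal skill.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two otherwise-identical candidates who differ only in group membership:
same_skill = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(same_skill)[:, 1])  # group 1 scores noticeably lower
```

Nothing in the training pipeline is "malicious" here; the skew lives entirely in the labels, which is exactly why diverse and representative datasets matter.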
Privacy
Privacy is another important ethical issue in AI. As AI systems collect and analyze vast amounts of personal data, there is a risk that individuals’ privacy will be compromised. It is essential that AI systems are designed to protect individuals’ privacy and that data is only collected and used for legitimate purposes.
Humans vs Robots
The development of AI also raises philosophical and moral questions. For example, as AI becomes more intelligent, there is a risk that it will surpass human intelligence and potentially even become conscious. This raises questions about the nature of consciousness and the value of human life.
Solution
To ensure that AI is developed and deployed in an ethical and responsible manner, it is essential that we have a comprehensive and transparent ethical framework in place. This framework should include principles such as transparency, accountability, and privacy protection. We all have a role to play in shaping the development and use of AI, and it is essential that we work together to address the ethical implications of this powerful technology.
Summary
The ethical implications of AI are complex and multifaceted, and we must address them with a comprehensive and transparent approach. We need to ensure that AI is developed and used in an ethical and responsible manner. By doing so, AI can have a positive impact on society and help create a better future for us all.
For years there has been an idea that the government manipulates the media to push a narrative, one approved by the White House. Suggesting that the government was suppressing our voices and opinions would have been dismissed as paranoia and delusion.
Why? Because that would be a direct violation of our First Amendment rights.
However, with the latest release of “The Twitter Files,” we see the government asking tech platforms for censorship. This is no longer speculation: the proof was shared by Twitter’s new CEO, Elon Musk, that the government has been working with tech companies to censor the content shown to us. We now need to take this into account before accepting online content as fact.
Stay curious but remain cautious.
Social media platforms’ “oil” is the data tracking your trends and habits on their platforms. That data is sold to data brokers, who use data science to build models on how to suggest various information to users like you. The brokers then sell access to their trend analysis of this data, sourced from social media and other internet entities, so that advertisers can target you.
Social media, the digital space where you spend the most time, once again gets paid to run these ads targeting you.
Allegations have been made that data brokers such as Cambridge Analytica were used by politicians to target voters and sway their opinions come Election Day. The idea: if analysis by companies like Cambridge Analytica found that gun control was a deciding factor in your vote, candidates could target you with their “promise” to make gun control more or less strict, depending on your views on the topic.
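As a hypothetical sketch of the broker-side mechanics described above (the function names and topic tags are made up for illustration): reduce each user’s activity trail to interest scores, then sell access to the users whose dominant interest matches a campaign.

```python
from collections import Counter

def interest_profile(events: list[str]) -> Counter:
    """Count topic tags attached to a user's likes, shares, and views."""
    return Counter(events)

users = {
    "user_a": interest_profile(["gun_control", "gun_control", "economy"]),
    "user_b": interest_profile(["climate", "economy", "climate"]),
}

def target_audience(users: dict, topic: str, min_share: float = 0.5) -> list[str]:
    """Users for whom `topic` makes up at least `min_share` of their activity."""
    return [
        uid for uid, profile in users.items()
        if profile[topic] / sum(profile.values()) >= min_share
    ]

print(target_audience(users, "gun_control"))  # ['user_a'] -- sold to the campaign
```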
Google Detects Earthquakes with Users’ Android Phones
Did you know that, in the background, your Android phone is part of the largest earthquake detection system ever built? Google uses phone sensor data, in combination with the USGS ShakeAlert system, to detect earthquakes and alert users of potential disasters. Is this a violation of your privacy? Is it okay, since it may help save your life?
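As a rough illustration of the idea (a simplified sketch, not Google’s actual pipeline): each phone flags a sudden jolt in its accelerometer reading, and an alert fires only when many phones in the same area agree, since one phone shaking is noise but thousands shaking at once looks like an earthquake.

```python
import numpy as np

def jolt_detected(accel_g: np.ndarray, threshold: float = 0.05) -> bool:
    """True if any sample deviates sharply from the resting 1 g reading."""
    return bool(np.any(np.abs(accel_g - 1.0) > threshold))

def area_alert(phone_reports: list[bool], quorum: int = 100) -> bool:
    """Raise an alert only when enough phones in one region report a jolt."""
    return sum(phone_reports) >= quorum

quiet_phone = 1.0 + np.random.normal(0, 0.005, 50)   # sensor noise only
print(jolt_detected(quiet_phone))                    # False
```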
Apple’s New Satellite Service Launched Today
1. Apple’s involvement with satellite communication companies
In Apple’s latest service launch, it teamed up with satellite company Globalstar (NYSE: GSAT) and Cobham Satcom to provide connectivity to and from iPhone 14 and iPhone 14 Pro devices.
“A $450 million investment from Apple’s Advanced Manufacturing Fund provides the critical infrastructure that supports Emergency SOS via satellite for iPhone 14 models. Available to customers in the US and Canada beginning later this month, the new service will allow iPhone 14 and iPhone 14 Pro models to connect directly to a satellite, enabling messaging with emergency services when outside of cellular and Wi-Fi coverage.”
"In 2021, Apple announced an acceleration in its US investments, with plans to make new contributions of more than $430 billion over a five-year period."
2. How the SOS system works
"When an iPhone user makes an Emergency SOS via satellite request, the message is received by one of Globalstar’s 24 satellites in low-earth orbit traveling at speeds of approximately 16,000 mph. The satellite then sends the message down to custom ground stations located at key points all over the world."
"The ground stations use new high-power antennas designed and manufactured specifically for Apple by Cobham Satcom in Concord, California. Cobham’s employees engineer and manufacture the high-powered antennas, which will receive signals transmitted by the satellite constellation. Along with communicating via text with emergency services, iPhone users can launch their Find My app and share their location via satellite when there is no cellular and Wi-Fi connection, providing a sense of security when off the typical communications grid."
"Once received by a ground station, the message is routed to emergency services that can dispatch help, or a relay center with Apple-trained emergency specialists if local emergency services cannot receive text messages."
3. Where the SOS system is available
Emergency SOS via satellite is available in the US and Canada starting today, November 15, and will come to France, Germany, Ireland, and the UK in December.
Links:
https://www.apple.com/newsroom/2022/11/emergency-sos-via-satellite-made-possible-by-450m-apple-investment/
https://www.apple.com/newsroom/2022/11/emergency-sos-via-satellite-available-today-on-iphone-14-lineup/
https://support.apple.com/en-us/HT213426
https://ast-science.com
https://investors.globalstar.com
Things discussed in the first pod include AI, our personal data, and self-driving cars. These are all future technologies that will integrate more and more into our lives, but we need to be mindful of the ethical challenges we will face as we are exposed to these algorithms.
The company behind the popular dating/social app Bumble has released an open-source project to detect unsolicited penis pictures sent to users’ DMs. They released it to help combat the sexual harassment that runs amok in this filthy digital space.
Now I wonder: how were they able to obtain a training dataset for the artificial intelligence to learn from? How did they obtain users’ consent to have their sexual organs saved and used by a company to build an application? Does the machine doing the machine learning get harassed in the process?
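For context on what such a system looks like in practice, here is a generic sketch of serving a binary image classifier. This is not Bumble’s actual code; it assumes a hypothetical TensorFlow SavedModel at `model_path` that is directly callable and outputs a single probability.

```python
import numpy as np
import tensorflow as tf

def is_lewd(image_path: str, model_path: str, threshold: float = 0.9) -> bool:
    """Flag an image if the classifier's probability exceeds the threshold."""
    model = tf.saved_model.load(model_path)   # assumed directly callable
    raw = tf.io.read_file(image_path)
    img = tf.io.decode_image(raw, channels=3, expand_animations=False)
    img = tf.image.resize(img, (480, 480)) / 255.0   # normalize to [0, 1]
    prob = model(tf.expand_dims(img, 0))             # batch of one image
    return float(np.squeeze(prob)) >= threshold      # True -> blur or flag
```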
“Buzz, buzz, buzz,” my phone vibrates as I receive yet another notification today.
How long do I wait before I look?
What could it be?
The notification could’ve been triggered by any of the plethora of applications I have installed on my phone.
The tension builds and I begin to feel anxious, like a child on Christmas Eve waiting for the clock to strike midnight. Now my smartwatch’s heart rate monitor is triggered and warns me of an elevated heart rate.
So, I give in.
I reach for the phone and the enchanting black mirror illuminates to life. I see the message it had waiting for me, which says, “Rain expected in 20 minutes at your location.”
-_-
The buzzing and humming of our phones occurs multiple times a day, and it has gradually become the norm. We’ve become so dependent on our electronics and their algorithms, placing a blinding trust in their utility. However, let’s not forget these four times algorithms f*cked humanity.
Y2K Bug
“The Commerce Department’s $100 billion estimate covers the cost of testing and repairing computers affected by the Y2K problem from 1995 to 2001. It does not, however, include the money that businesses have spent on new machines to replace older ones that have date glitches, which economists say could provide some long-term economic benefits through productivity gains.
The Commerce estimate also doesn’t take into account firms’ Y2K-related publicity campaigns or the possible cost of litigation stemming from undiscovered glitches. As a result, some economists believe overall Y2K spending probably is somewhat higher, perhaps closer to $150 billion.”
To save a few bits of data, some software engineers simply used the last two digits to denote the year. This forced companies to retroactively review production code and implement fixes for the issues caused by the missing two leading digits of the year.
Overall, the Y2K fix was needed to prevent issues in applications such as scheduling, trend analysis, and time-based calculations: for example, banking applications that calculate interest on your account, or that determine whether you can tap into your 401(k) with or without penalty.
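As a minimal sketch of the bug itself (a hypothetical account-age function), this is the kind of arithmetic that broke when only two digits of the year were stored:

```python
def account_age_two_digit(opened_yy: int, current_yy: int) -> int:
    """Account age computed the pre-Y2K way, from two-digit years."""
    return current_yy - opened_yy

# An account opened in 1985 ("85"), evaluated on January 1, 2000 ("00"):
print(account_age_two_digit(85, 0))   # -85 -- interest accrual goes haywire
print(2000 - 1985)                    # 15  -- what four-digit years give you
```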
“We found that single-vehicle collisions comprised the majority of crashes (51 cases, 32% of overall incidents), with crashes with other vehicles (26 cases, 17%) and crashes with pedestrians and bikes (13 cases, 8%) making up the remainder of crash incidents.”
Lin, Allen, Kate Kuehl, Johannes Schöning, and Brent Hecht. (2017). Understanding “Death by GPS”: A Systematic Analysis of Catastrophic Incidents Associated with Personal Navigation Technologies. doi:10.1145/3025453.3025737.
People who drive to unfamiliar places trust GPS algorithms to get them to their dream destinations, but some end up in nightmare situations. Traveling the great unknown is something humanity has done for millennia, from primitive methods such as following the North Star to modern-day GPS applications. GPS has made traveling so much easier that we sometimes take for granted the complexity involved in finding a path to our destination. But is it the safest way?
One unfortunate, fatal account comes from a Canadian couple who, in 2011, set out on a road trip from their home in British Columbia to Las Vegas. The trip ended in tragedy because they depended on GPS to navigate to Sin City. The couple took a route through the desert that the GPS suggested was best, and they got stuck in thick mud on the rough terrain they encountered along the way.
The husband, Albert Chretien (59), left his wife, Rita Chretien (56), in the vehicle while he went to search for help. Rita was found in the vehicle seven weeks later, 30 pounds lighter; she was rushed to the nearest hospital and recovered fully. Albert, however, was found dead about a year later by hunters.
“Krafcik (John Krafcik, CEO of Waymo (owned by Google’s parent)) said that you have to be sensitive to the real losses people suffer in accidents such as these. But, he added, achieving the bigger picture — eliminating the roughly 35,000 annual auto fatalities, largely due to driver error — means not being deterred by the “bumps in the road” to accident-free driving. Krafcik was basically putting a fresh coat of paint on an old, rarely spoken platitude: People must get killed en route to a better, safer transportation system.”
Korman, Richard. “Give Us the Risk Reality.” ENR: Engineering News-Record, Aug. 2018, p. 52.
The future certainly is here, and the wealthiest man in the world now owns one of the trailblazing companies introducing “autonomous” machines to our roads. However, with “beta” software deployed in Tesla Autopilot, one has to question how transparently the risks are communicated to consumers and the general public: these algorithms control two-ton metal machines, armed with li-ion batteries, doing 70 MPH on a road near you. Calling a Tesla “self-driving” is marketing genius, but it misleads customers and puts people in danger. On the SAE classification of autonomous vehicles, which runs from level 0 (no automation) to level 5 (full autonomy), a Tesla is, at best, level 2.
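For reference, a quick sketch of the SAE J3016 automation levels mentioned above (descriptions paraphrased):

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels, paraphrased."""
    NO_AUTOMATION = 0           # human does all the driving
    DRIVER_ASSISTANCE = 1       # steering OR speed support (e.g., cruise control)
    PARTIAL_AUTOMATION = 2      # steering AND speed support; driver must supervise
    CONDITIONAL_AUTOMATION = 3  # self-driving in limited conditions; driver on standby
    HIGH_AUTOMATION = 4         # no driver needed within a defined operating domain
    FULL_AUTOMATION = 5         # no driver needed anywhere

print(SAELevel.PARTIAL_AUTOMATION)  # where the article places Tesla Autopilot
```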
With the slick marketing, one Tesla customer was unfortunately too reliant on these “self-driving” features and ended up in a fatal accident with a truck.
“According to internal materials reviewed by The Intercept, Dataminr meticulously tracked not only ongoing protests, but kept comprehensive records of upcoming anti-police violence rallies in cities across the country to help its staff organize their monitoring efforts, including events’ expected time and starting location within those cities. A protest schedule seen by The Intercept shows Dataminr was explicitly surveilling dozens of protests big and small, from Detroit and Brooklyn to York, Pennsylvania, and Hampton Roads, Virginia.”
Biddle, Sam. “Police Surveilled George Floyd Protests With Help From Twitter-Affiliated Startup Dataminr.” The Intercept, 9 July 2020, theintercept.com/2020/07/09/twitter-dataminr-police-spy-surveillance-black-lives-matter-protests.
What should we do when we witness injustice? Usually, we think of spreading information, organizing, and protesting. However, since we are using the World Wide Web, the powers that be also have access to the ideas you are spreading. With social media platforms already tracking and cataloging your data, it has become much easier to suppress activism. For instance, the company Dataminr reportedly tracked BLM activists on social media and helped police know of these activists’ planned activities.
These four events have occurred, yet we are still alive and breathing. Still, we should reflect on them so that we learn not to place blind faith in applications or their creators. We trust the applications we use so much that we often neglect to read the terms and agreements. We are quick to click the “I Agree” checkbox and the “Next” button to finally get access to the latest application everyone is buzzing about, without knowing what we traded for that access. The old adage goes, “if something seems too good to be true, it probably is,” yet we haven’t applied it to the technology we use.
Life is more complex than anything an algorithm can currently predict or handle. We, the human element, are the counterbalance that ensures technology and technology companies do not make life-altering decisions without our final say. Using our empathy and compassion, which are what separate us from machines, we need to provide the checks and balances necessary to prevent questionable use of the algorithms that are so integrated into our lives. I ask that my fellow software engineers, and the tech companies that employ us, be more transparent about the AI being deployed on society, so that the public can make its own risk assessment of decisions made by AI that may have been trained on biased or skewed datasets.