The Ethics of Artificial Intelligence: Exploring the Ethical Implications of Creating Intelligent Machines and Robots

Artificial intelligence (AI) is rapidly advancing, and its potential impact on society is growing. From autonomous vehicles to intelligent personal assistants, AI is transforming many industries and changing the way we live our lives. But with this great power comes great responsibility, and there are many ethical questions that arise as we create intelligent machines and robots.


In this article, we explore the complex ethical implications of AI and robotics and the challenges that arise as these technologies continue to evolve. We examine the potential benefits and risks of AI, including issues of bias, transparency, privacy, and accountability. We also delve into the philosophical and moral questions that AI raises, such as the nature of consciousness, free will, and the value of human life.

Biased Data = Biased AI

One of the key ethical issues in AI is the potential for bias. AI systems are trained on data, and if that data is biased, the AI system will be biased as well. This can lead to discrimination and inequality, particularly in areas such as hiring, lending, and criminal justice. To address this issue, it is essential that AI systems are designed to be transparent and accountable, and that data sets are diverse and representative.
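To make this concrete, here is a minimal, hypothetical sketch in Python. The dataset and all its numbers are invented for illustration: a "model" trained on historically skewed hiring decisions simply reproduces the skew, predicting different outcomes for equally qualified candidates.

```python
# Toy illustration: a "model" trained on historically skewed hiring data
# reproduces the skew. All records below are invented for illustration.
from collections import defaultdict

# Historical decisions: (group, qualified, hired)
history = [
    ("A", True,  True),  ("A", True, True), ("A", False, True),  ("A", True, True),
    ("B", True,  False), ("B", True, True), ("B", False, False), ("B", True, False),
]

# "Training": record the hire rate per group, a stand-in for any model
# that picks up group membership as a predictive feature.
hires, totals = defaultdict(int), defaultdict(int)
for group, qualified, hired in history:
    totals[group] += 1
    hires[group] += hired

def predict_hire(group):
    """Predict 'hire' if the historical hire rate for this group exceeds 50%."""
    return hires[group] / totals[group] > 0.5

# Two equally qualified candidates get different predictions.
print(predict_hire("A"))  # True: group A was historically favored
print(predict_hire("B"))  # False: group B inherits the historical bias
```

Real systems fail in subtler ways, but the mechanism is the same: if group membership correlates with past decisions, a model trained on those decisions will learn that correlation.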

Privacy

Privacy is another important ethical issue in AI. As AI systems collect and analyze vast amounts of personal data, there is a risk that individuals’ privacy will be compromised. It is essential that AI systems are designed to protect individuals’ privacy and that data is only collected and used for legitimate purposes.

Humans vs Robots

The development of AI also raises philosophical and moral questions. For example, as AI becomes more intelligent, there is a risk that it will surpass human intelligence and potentially even become conscious. This raises questions about the nature of consciousness and the value of human life.

Solution

To ensure that AI is developed and deployed in an ethical and responsible manner, it is essential that we have a comprehensive and transparent ethical framework in place. This framework should include principles such as transparency, accountability, and privacy protection. We all have a role to play in shaping the development and use of AI, and it is essential that we work together to address the ethical implications of this powerful technology.

Summary

In conclusion, the ethical implications of AI are complex and multifaceted. It is essential that we take a comprehensive and transparent approach to addressing these issues. We need to ensure that AI is developed and used in an ethical and responsible manner. By doing so, AI can have a positive impact on society and help to create a better future for us all.


Hope you enjoyed this blog post and found it insightful. Don’t forget to leave a comment.

Feel free to contact me on Instagram and Twitter with any questions or just to say hi.

Check out https://metrocoderlog4j.com for tech news and more.

5 AI tools for Content Creators

DALL-E 2

Image generation tool DALL-E 2

Creating images from your imagination used to require dedicating hours to mastering artistic skills. Now it just requires typing out your vision and submitting it to the AI tool DALL-E 2 to generate your ideas. It’s that simple.

Don’t want to just copy what DALL-E 2 creates? Fine. Use it as inspiration to create something of your own.

The point is: let this awesome AI tool help you create your next amazing art piece.

Astria

Image generation and augmentation tool Astria

Here is another AI image generation tool, slightly different from DALL-E 2. Astria takes existing images and provides you with different artistic representations of them. You simply upload a series of images of a product or a person, the AI builds models from these images, and then it uses the models to generate mind-blowing content.

This tool is great if you want to showcase a product in different scenarios and backgrounds, all without ever having mastered Photoshop.

Jasper

Text generation tool

This next tool, Jasper, does all the writing for you. Yes, literally. Simply input some text to help the AI understand what content you want it to generate, and it spits out content in whatever format or for whatever platform you choose. From sales copy to social media posts, Jasper can generate it for you.

This is a great tool to help boost your productivity in generating content for your audience.

Magic Eraser

Image editing tool Magic Eraser

Did you just break up with someone, but took some cool vacation photos that you now have to delete off social media? Well, you no longer have to, thanks to Magic Eraser. This tool makes it simple to highlight the object you want removed from an image and, like magic, AI generates a new image without that person or object in it.

So there’s no more need to take those images off social media just because your ex was in them. lol

Talk to Books

Book search engine Talk to Book by Google

If you are looking for a new book to read, or for reference material for your next paper, look no further than Google’s tool, Talk to Books. Simply enter some text describing what you are looking for in a book, and Google will generate a list of books covering the topic you want to learn more about.


Hope you enjoyed this blog post and found it helpful for your next project.


Podcast – Midterm Elections Fuel Misinformation Fears of the Past

For social media platforms, the “oil” is data: the platform tracks your trends and habits, and that data is sold to data brokers, who use data science to build models of how to suggest information to users like you. The brokers then sell access to their trend analysis of this data, sourced from social media and other internet entities, to anyone who wants to target you.

Social media, the digital space where you spend the most time, then gets paid once again to run the ads targeting you.

Allegations have been made that firms such as Cambridge Analytica were used by politicians to target voters and sway their opinions come Election Day. The idea: if analysis showed that gun control was a deciding factor in your vote, candidates could target you with a “promise” to make gun control more or less strict, tailored to the views that companies like Cambridge Analytica had deciphered from your data.
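As a rough illustration of the kind of interest-based targeting described above, here is a toy Python sketch. The profile scores, topics, and ad names are all invented; real targeting pipelines are far more elaborate.

```python
# Toy sketch of interest-based ad targeting. All data invented for illustration.

# A broker's "model" of a user: inferred interest scores per topic.
user_profile = {"gun_control": 0.9, "economy": 0.4, "healthcare": 0.2}

# Candidate ads, each tagged with the topic it appeals to.
ads = [
    {"id": "ad_economy", "topic": "economy"},
    {"id": "ad_gun_control", "topic": "gun_control"},
    {"id": "ad_healthcare", "topic": "healthcare"},
]

def pick_ad(profile, ads):
    """Serve the ad whose topic the user scores highest on."""
    return max(ads, key=lambda ad: profile.get(ad["topic"], 0.0))

print(pick_ad(user_profile, ads)["id"])  # ad_gun_control
```

The unsettling part is not the selection logic, which is trivial, but how the interest scores were inferred in the first place.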

Cited Sources: 

Money spent in the current midterm election: https://www.opensecrets.org/news/2022/11/total-cost-of-2022-state-and-federal-elections-projected-to-exceed-16-7-billion/

FCC TikTok ban: https://nypost.com/2022/11/01/us-government-should-ban-tiktok-fcc-commissioner-brendan-carr-says/

TikTok Class action lawsuit: https://www.nbcchicago.com/news/local/judge-approves-92-million-tiktok-settlement-with-illinois-claimants-receiving-biggest-share/2921881/

For a full breakdown of Algorithms F*cking users over: https://metrocoderlog4j.com/5-times-algorithms-fcked-humanity/



Podcast Episode 01

Photo by Tara Winstead on Pexels.com

Apple’s New Satellite Service

Apple’s New Satellite Service launched today.

1. Apple’s involvement with a satellite communications company

In Apple’s latest service launch, they teamed up with satellite company Globalstar (NYSE: GSAT) and Cobham Satcom to provide connectivity to and from iPhone 14 and iPhone 14 Pro devices.

“A $450 million investment from Apple’s Advanced Manufacturing Fund provides the critical infrastructure that supports Emergency SOS via satellite for iPhone 14 models. Available to customers in the US and Canada beginning later this month, the new service will allow iPhone 14 and iPhone 14 Pro models to connect directly to a satellite, enabling messaging with emergency services when outside of cellular and Wi-Fi coverage.”

“In 2021, Apple announced an acceleration in its US investments, with plans to make new contributions of more than $430 billion over a five-year period.”

2. How the SOS system works

“When an iPhone user makes an Emergency SOS via satellite request, the message is received by one of Globalstar’s 24 satellites in low-earth orbit traveling at speeds of approximately 16,000 mph. The satellite then sends the message down to custom ground stations located at key points all over the world.”

“The ground stations use new high-power antennas designed and manufactured specifically for Apple by Cobham Satcom in Concord, California. Cobham’s employees engineer and manufacture the high-powered antennas, which will receive signals transmitted by the satellite constellation. Along with communicating via text with emergency services, iPhone users can launch their Find My app and share their location via satellite when there is no cellular and Wi-Fi connection, providing a sense of security when off the typical communications grid.”

“Once received by a ground station, the message is routed to emergency services that can dispatch help, or a relay center with Apple-trained emergency specialists if local emergency services cannot receive text messages.”

3. Where the SOS system is available

Emergency SOS via satellite is available in the US and Canada starting today, November 15, and will come to France, Germany, Ireland, and the UK in December.

Links:

https://www.apple.com/newsroom/2022/11/emergency-sos-via-satellite-made-possible-by-450m-apple-investment/

https://www.apple.com/newsroom/2022/11/emergency-sos-via-satellite-available-today-on-iphone-14-lineup/

https://support.apple.com/en-us/HT213426

https://ast-science.com

https://investors.globalstar.com
  1. Apple’s New Satellite Service
  2. Midterm Election Fuel Misinformation Fears of the Past
  3. AI, Bots trapped in infinite conversation, how machine learning works, AI in the medical industry, Deep fakes and more

The first pod discusses AI, our personal data, and self-driving cars. These are all future technologies that will integrate further into our lives, but we need to be mindful of the ethical challenges we will face as we are exposed to their algorithms.

Links:

Infinite conversation: https://apple.news/AVYW7SCODTXuzAzx_4…

infiniteconversation.com by Giacomo Miceli

Deepfake: https://en.wikipedia.org/wiki/Deepfake

AI in Medicine: https://apple.news/AwBd4E8HwQq6bZ6GxZ…

P-value: https://www.investopedia.com/terms/p/…

Protein folding: https://foldingathome.org/?lng=en

Self driving car: Tesla Legal issues: https://apple.news/AHRjYrJuLTTaKemQH8…

Self driving Car making ethical choice: https://www.nature.com/articles/d4158…



Open-source Dick detecting AI saves humanity


The company behind the popular dating/social app Bumble has released an open-source project to detect unsolicited dick pics sent to users’ DMs. They released it to help combat the sexual harassment that runs amok in this filthy digital space.

Now I wonder: how were they able to obtain the training dataset for the AI to learn from? How did they obtain users’ consent to have images of their sexual organs saved and used by a company to build an application? Does the machine doing the machine learning get harassed in the process?

Watch the Metro Minute on YouTube

Here’s the official release notes from Bumble: https://bumble.com/en-us/the-buzz/bumble-open-source-private-detector-ai-cyberflashing-dick-pics



4 Times Algorithms F*cked Humanity

“Buzz, buzz buzz” my phone vibrates as I receive yet another notification today. 

How long do I wait before I look? 

What could it be? 

The notification could’ve been triggered by any of the plethora of applications I have installed on my phone. 

The tension builds and I begin to feel anxious, like a child on Christmas Eve waiting for the clock to strike midnight. Now my smartwatch’s heart rate monitor is triggered and warns me of an elevated heart rate.

So, I give in. 

I reach for the phone and the enchanting black mirror illuminates to life. I see the message it had waiting for me, which says, “Rain expected in 20 minutes at your location.”

 -_- 

The buzzing and humming of our phones occurs multiple times a day, and it has gradually become the norm. We’ve become so dependent on our electronics and their algorithms, placing a blind trust in their utility. However, let’s not forget these 4 times algorithms f*cked humanity.


Y2K Bug

“The Commerce Department’s $100 billion estimate covers the cost of testing and repairing computers affected by the Y2K problem from 1995 to 2001. It does not, however, include the money that businesses have spent on new machines to replace older ones that have date glitches, which economists say could provide some long-term economic benefits through productivity gains.

The Commerce estimate also doesn’t take into account firms’ Y2K-related publicity campaigns or the possible cost of litigation stemming from undiscovered glitches. As a result, some economists believe overall Y2K spending probably is somewhat higher, perhaps closer to $150 billion.”

Chandrasekaran, Rajiv. “$100 Billion Price Tag for Y2K FIX / Computer Bug Repair Sets Peacetime Record.” SFGATE, San Francisco Chronicle, 23 July 2020, https://www.sfgate.com/news/article/100-Billion-Price-Tag-for-Y2K-Fix-Computer-bug-2896765.php.

To save a few bits of storage, some software engineers used only the last two digits to denote the year. This forced companies to retroactively review production code and implement fixes for the issues caused by the two missing leading digits of the year.

Overall, the Y2K fix was needed to prevent issues in applications such as scheduling, trend analysis, and time-based calculations. For example, banking applications that calculate interest on your account, or that determine whether you can tap into your 401(k) with or without penalty.
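To see why two-digit years break time-based calculations, here is an illustrative Python sketch (not code from any real system), including the "windowing" technique that was commonly used as a fix:

```python
# Illustrative sketch of the Y2K bug: years stored as two digits break
# any calculation that crosses the 1999 -> 2000 boundary.

def years_elapsed_buggy(start_yy, end_yy):
    """Naive two-digit arithmetic: implicitly assumes every year is 19xx."""
    return end_yy - start_yy

def years_elapsed_fixed(start_yy, end_yy, pivot=50):
    """A common 'windowing' fix: two-digit years below the pivot are
    interpreted as 20xx, the rest as 19xx."""
    def expand(yy):
        return 2000 + yy if yy < pivot else 1900 + yy
    return expand(end_yy) - expand(start_yy)

# An account opened in 1995 ('95'), interest calculated in 2000 ('00'):
print(years_elapsed_buggy(95, 0))  # -95: negative elapsed time!
print(years_elapsed_fixed(95, 0))  # 5: correct
```

Windowing only postpones the problem past the chosen pivot, which is why the lasting fix was to store years with four digits.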

Click for more details of the Y2K bug that caused mass hysteria.


Killer GPS

“We found that single-vehicle collisions comprised the majority of crashes (51 cases, 32% of overall incidents), with crashes with other vehicles (26 cases, 17%) and crashes with pedestrians and bikes (13 cases, 8%) making up the remainder of crash incidents.”

Lin, Allen & Kuehl, Kate & Schöning, Johannes & Hecht, Brent. (2017). Understanding “ Death by GPS “ : A Systematic Analysis of Catastrophic Incidents Associated with Personal Navigation Technologies. 10.1145/3025453.3025737.

People who drive to unfamiliar places trust GPS algorithms to get them to their dream destinations, but some end up in nightmare situations. Traveling into the great unknown is something humanity has done for millennia, from primitive methods such as following the North Star to modern-day GPS applications. GPS has made traveling so much easier that we sometimes take for granted the complexity involved in finding a path to our destination, but is it the safest way?

One unfortunate fatal account comes from a Canadian couple who, in 2011, set out on a road trip from their home in British Columbia to Las Vegas and depended on GPS to navigate their way to Sin City. Sadly, the couple took a route through the desert that the GPS suggested was best and got stuck in thick mud in the rough terrain they encountered along the way.

The husband, Albert Chretien (59), left his wife, Rita Chretien (56), in the vehicle while he went to search for help. Rita was found in the vehicle seven weeks later, 30 pounds lighter; she was rushed to the nearest hospital and recovered fully. Albert, however, was found dead about a year later by hunters.

For more details of the tragic accident caused by GPS error


Self-driving to Death

“Krafcik (John Krafcik, CEO of Waymo (owned by Google’s parent)) said that you have to be sensitive to the real losses people suffer in accidents such as these. But, he added, achieving the bigger picture — eliminating the roughly 35,000 annual auto fatalities, largely due to driver error — means not being deterred by the “bumps in the road” to accident-free driving. Krafcik was basically putting a fresh coat of paint on an old, rarely spoken platitude: People must get killed en route to a better, safer transportation system.”

Korman, Richard. “Give Us the Risk Reality.” ENR: Engineering News-Record, Aug. 2018, p. 52.

The future certainly is here, and the wealthiest man in the world now owns one of the trailblazing companies introducing “autonomous” machines to our roads. However, with “beta” software deployed in Tesla’s Autopilot, one has to question how transparently the risks are communicated to consumers and the general public: these algorithms control two-ton metal machines, armed with li-ion batteries on wheels, doing 70 MPH on a road near you. Calling a Tesla “self-driving” is marketing genius, but it misleads customers and puts people in danger. On the six-level SAE classification of autonomous vehicles, where level 0 is no automation and level 5 is full autonomy, a Tesla is, at best, level 2.

With the slick marketing, one Tesla customer was unfortunately too reliant on these “self-driving” features, which ended in a fatal collision with a truck.

Click for more details of the first fatal autonomous car accident 


Bye, Bye First Amendment

“According to internal materials reviewed by The Intercept, Dataminr meticulously tracked not only ongoing protests, but kept comprehensive records of upcoming anti-police violence rallies in cities across the country to help its staff organize their monitoring efforts, including events’ expected time and starting location within those cities. A protest schedule seen by The Intercept shows Dataminr was explicitly surveilling dozens of protests big and small, from Detroit and Brooklyn to York, Pennsylvania, and Hampton Roads, Virginia.”

Biddle, Sam. “Police Surveilled George Floyd Protests With Help From Twitter-Affiliated Startup Dataminr.” The Intercept, 9 July 2020, theintercept.com/2020/07/09/twitter-dataminr-police-spy-surveillance-black-lives-matter-protests.

What should we do when we witness injustice? Usually, we think of spreading information, organizing, and protesting. However, since we do this on the World Wide Web, the powers that be also have access to the ideas you are spreading. With social media platforms already tracking and cataloging your data, suppressing activism has become much easier. For instance, the company Dataminr reportedly tracked BLM activists on social media and helped police know of these activists’ planned activities.

Click for more information on the Dataminr story


These four events have occurred, yet we are still alive and breathing. Still, we should reflect on them so that we learn not to place blind faith in applications or their creators. We trust the applications we use so much that we often neglect to read the terms and agreements. We are quick to click the “I Agree” checkbox and the “Next” button to finally gain access to the latest application everyone is buzzing about, without knowing what we traded away for that access. The old adage goes, “if something seems too good to be true, it probably is,” yet we haven’t applied it to the technology we use.

Life is more complex than anything an algorithm can currently predict or handle. We, the human element, are the counterbalance that ensures technology and technology companies do not make life-altering decisions without our final say. Using our empathy and compassion, which is what separates us from machines, we will need to provide the checks and balances necessary to prevent questionable use of the algorithms that are so integrated into our lives. I ask that my fellow software engineers, and the tech companies that employ us, be more transparent about the AI being deployed on society, so that the public can make its own risk assessment of the decisions made by AI that may be trained on biased or skewed data sets.
