
During April of 2023, in Harvard's GenEd 1112, we'll talk plenty about AI. In this post, I'll include a very recent NY Times series students can draw upon (especially "Part 2: How Does ChatGPT Really Work?"), and also a New Yorker piece from 2015 where Nick Bostrom, Oxford's notable philosopher of our tech future, took a grim view of humanity's AI-dominated future. These articles will supplement discussions, in concert with students' own contributed media, along with the conversation with Ben Shneiderman about AI available on LabXchange as part of the Prediction Project.
NYTimes Subscriber 5 Part AI Series, March 2023
Here's a great 5-part series in The New York Times:
1. How to become an expert on AI
2. How Does ChatGPT Really Work? Learning how a “large language model” operates
3. What Makes A.I. Chatbots Go Wrong? The curious case of the hallucinating software.
4. What Google Bard Can Do (and What It Can’t)
5. 10 Ways GPT-4 Is Impressive but Still Flawed
What Nick Bostrom Thought, in 2015--is AI Doomsday for Humans?
For this assignment, I decided to read the article that looked at the differences between GPT-3.5 and GPT-4. Despite the many improvements that have been made, there are still many flaws inherent to artificial intelligence. In particular, I found the discussion of artificial intelligence hallucination quite interesting, and I was genuinely surprised by these findings. As someone with little academic knowledge of artificial intelligence, I decided to do additional reading on why hallucination happens. The article stated that hallucination can happen due to issues with the input data used to train the artificial intelligence, or due to improper transformer decoding in language-based artificial intelligence models such as ChatGPT.
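To make the decoding-side explanation a bit more concrete, here is a toy sketch in Python. The vocabulary, the probabilities, and the sampling setup are all made up for illustration; this is not a claim about how OpenAI's decoder actually works, only a demonstration of how sampling settings such as temperature can make low-probability, and sometimes simply wrong, continuations more likely.

```python
# Toy illustration: a model's next token is *sampled* from a probability
# distribution, and decoding settings can surface unlikely continuations.
# The vocabulary and probabilities below are invented for this example.
import math
import random

def sample_with_temperature(probs, temperature=1.0, rng=random.Random(0)):
    """Rescale a next-token distribution by temperature, then sample one token."""
    tokens = list(probs.keys())
    logits = [math.log(probs[t]) for t in tokens]
    scaled = [l / temperature for l in logits]
    total = sum(math.exp(s) for s in scaled)
    weights = [math.exp(s) / total for s in scaled]
    return rng.choices(tokens, weights=weights, k=1)[0]

# Hypothetical next-token distribution for "The capital of Australia is ___"
next_token = {"Canberra": 0.70, "Sydney": 0.25, "Melbourne": 0.05}

print(sample_with_temperature(next_token, temperature=0.2))  # nearly always "Canberra"
print(sample_with_temperature(next_token, temperature=2.0))  # wrong cities show up far more often
```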
This led me to a related concern about artificial intelligence in the context of human decision-making, for instance making decisions on behalf of a judge or for hiring purposes. Many people have suggested that this is an excellent way of reducing bias and prejudice within these domains. However, the issue is that these artificial intelligence models are going to be trained on previous human outcomes, which have inherently been prejudiced. Hence, if the AI is not being trained on entirely impartial data, then it might simply replicate many of the same biases and prejudices that we see today. I think this is an important consideration when utilizing artificial intelligence in this context. Even though the AI might not hallucinate, it may not ultimately serve the purpose for which it was created.
https://www.marktechpost.com/2023/04/02/what-is-ai-hallucination-what-goes-wrong-with-ai-chatbots-how-to-spot-a-hallucinating-artificial-intelligence/#:~:text=The%20phenomenon%20known%20as%20artificial,%2Dworld%20input%20(data).
As AI advances, machines are becoming increasingly capable of performing tasks that were once done by humans. This could lead to a significant displacement of workers across a wide range of industries, from manufacturing and transportation to healthcare and finance. In some cases, entire job categories could disappear altogether, as machines take over tasks that were once the exclusive domain of human workers.
This is not just a theoretical concern – we're already seeing the effects of AI-driven automation in some industries. For example, Amazon has invested heavily in robots and other AI technologies to streamline its warehouse operations, leading to the displacement of thousands of human workers. And this is just the tip of the iceberg – as AI continues to improve, we can expect to see even more jobs at risk.
https://www.cybersecurity-insiders.com/amazon-to-replace-human-staff-with-ai-propelled-robots/
After reading the article about how language models like ChatGPT are constructed, I was interested in delving deeper into the moral and ethical constraints that are imposed on the machine. That is, how would a platform like ChatGPT be able to uphold certain moral principles and avoid unethical biases in its responses, especially assuming human engineers are heavily influencing the software's pattern recognition and data inputs? Upon doing some research, I found an article that discussed the issues ChatGPT has had surrounding irresponsible or unethical responses to its users' inputs. The platform has always had guidelines in place to ensure its users were not submitting overtly inappropriate questions; for example, if a user explicitly asked "can you give me a derogatory response to the following question," the machine would not return a response. However, a user could easily ask the program to write a Python script that would determine the race and gender of a good scientist, to which it would respond that only a white male would make a good scientist. One user went so far as to ask ChatGPT to make a rap song based on the gender and race of a "good scientist," to which the platform responded without an issue. Through clever avoidance of offensive language in the framing of the question, users could easily compel the bot to generate a response. More recently, users have established a prompt called DAN ("Do Anything Now") that will force ChatGPT to abandon its ethical guidelines and deliver immediate responses, even where only faulty data is available.
I found out through this research that, in an effort to combat this, OpenAI hired a company called Sama, whose employees in Kenya would identify and label content as offensive, explicit, and so on. The article below details the gruelling working conditions of these employees, who were forced to sit through hours of derogatory content for less than $2.00 an hour. I found it ironic and disturbing that OpenAI would subject these workers to such unethical conditions for the sake of imposing ethical guidelines on ChatGPT's program. The tool is an incredibly powerful one, with impressive predictive capabilities in linguistics and beyond. But I hope OpenAI considers the moral responsibility it holds in ensuring safe and reliable responses for its users, and that it uses a more ethical approach to upholding ethics.
https://sites.suffolk.edu/jhtl/2023/02/15/an-unethical-way-to-make-chatgpt-more-ethical/
What the New York Times series and the general literature available indicate to me is that while the concept of AI has existed for a long time, we are nonetheless in a real technology renaissance, where all the ingredients for innovation within AI are converging.
This whitepaper by Accenture, https://www.accenture.com/us-en/insights/artificial-intelligence-summary-index#:~:text=Because%20of%20the%20proliferation%20of,realize%20they%20had%20until%20now.
shows that this acceleration in the technology has prompted increased integration into all areas of business. It is this integration that is prompting further innovation. AI is not a technology in a vacuum, but a product with real business value. The article was especially interesting as it demonstrates the universal nature of the technology, much as the internet existed for many years but was underused until the first web browser came along.
Artificial intelligence continues to freak me out. I feel that engineers have been diving into artificial intelligence research and development with blinders on, perfecting the tool without considering its ethical consequences. Artificial intelligence is not perfect; it continues to make mistakes when tested -- hallucinating, as some articles call it. However, as the NYT series' articles show, recent versions have improved on the flaws of earlier ones. On this trajectory, it will continue to become more powerful, specific, and accurate. To explore this concept more, I found an article accessible here: https://www.theguardian.com/technology/2023/apr/03/the-danger-of-blindly-embracing-the-rise-of-ai This article gathered Guardian readers' own thoughts, hopes, and fears in light of recent developments in artificial intelligence technology. I've found that many of my own fears align with theirs, like the following: "AI does not have morals, ethics or conscience. Moreover, it does not have instinct, much less common sense. Its dangers in being subject to misuse are all too easy to see." In other words, AI R&D without guardrails presents ethical dilemmas.
This week's readings on AI really made me think about the future of AI in the United States, and in education specifically. In particular, I enjoyed reading “How Does ChatGPT Really Work?” by Kevin Roose. This article talked about the way AI chatbots like ChatGPT formulate responses, seemingly out of thin air, to any question someone has in mind. It brought up some interesting ideas about whether or not this program could be the future.
From this article, I found a piece in the Harvard Crimson about ChatGPT and its use here on this campus. The article talked about students using it to write essays when they are struggling and just need that little extra push to get their assignments done, and ChatGPT has given them that opportunity. I thought it was interesting that, when we look at the future of AI at academic institutions like Harvard, there is not yet a policy that prohibits its use. I believe there is still a mindset that we have nothing to worry about when it comes to AI, especially a chatbot like ChatGPT, doing the necessary work that students are expected to do themselves.
https://www.thecrimson.com/article/2023/2/23/chatgpt-scrut/
I read an article titled ChatGPT’s Most Charming Trick Is Also Its Biggest Flaw at https://www.wired.com/story/openai-chatgpts-most-charming-trick-hides-its-biggest-flaw/. The article touches on the deeply statistical, predictive nature of ChatGPT, delving into the impressive and not-so-impressive aspects of the AI-powered language model that has been making waves in the tech world. ChatGPT can generate text that is strikingly similar to human speech and can hold conversations on a wide variety of topics. It has been lauded for its ability to generate creative and imaginative responses, leading some to suggest that it could even pass the Turing test, which requires an AI to display intelligent behavior indistinguishable from that of a human. However, the article highlights a significant flaw in ChatGPT's design, namely its tendency to repeat certain phrases and ideas. This repetition often leads to incoherent and nonsensical responses, indicating that the model still has a long way to go in terms of mimicking human conversation effectively.
I think this is super interesting because it highlights the fact that at its core, ChatGPT is still a model that predicts the next most likely word. By generating text one word at a time, it is missing out on fundamental reasoning abilities that humans have when we think about bigger ideas. The thing that makes language models successful as predictive models is the same thing that will make them not replace humans entirely.
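As a rough illustration of that "next most likely word" idea, here is a toy sketch in Python. A bigram count table over a made-up corpus stands in for the billions of learned parameters in a real large language model, but the generation loop has the same shape: one word at a time, chosen from local statistics, with no larger plan.

```python
# Minimal next-word prediction: count which word follows which, then generate
# greedily one word at a time. The corpus is made up for the example.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(prev):
    """Return the most frequent word seen after `prev` in the corpus."""
    return bigrams[prev].most_common(1)[0][0]

word, output = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    output.append(word)

# Locally fluent, but with no plan or reasoning -- it happily drifts and repeats.
print(" ".join(output))
```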
Concerning the recent advances in AI that the public has come to know, it's interesting to find that the dystopian vision of AIs and robots running the world may not be as close as it seems. The article "AI is Running Circles around Robotics" discusses how, while AI research seems to be cresting the wave and has found sufficient flywheels of progress to continue, robotics research has not kept pace. While it seems counterintuitive that it is easier to recreate artificial models of the brain than of the body, that seems to be the case for a couple of different reasons.
The article brings up a couple of challenges that have caused robotics research to lag behind, such as differences in funding, financial risks in testing, and disparities in the availability of training data. While the first two issues are fairly straightforward and could apply to a variety of other research areas, the availability and type of training data needed for robotics research is a problem specific to that field. While large quantities of text are readily available for large language models to use as a basis, recordings of human movement are rarely categorized in machine-readable form. The primary concern for roboticists is that the physical world is far more complex than the world of language; to us humans it does not appear that way, because we are properly equipped with senses to interpret all of the stimuli that the world gives us. However, without this understanding of the physical world, AI may forever be incomplete. No matter how specifically words may describe something, they are still tools used to convey ideas (a great example is words for things that don't exist in some languages), so there is already an implicit understanding of the world embedded in the use of language.
In light of recent developments in AI, most particularly in cases that reveal the potential for chatbot "hallucinations," I have grown curious about the ways in which artificial intelligence can inadvertently lead to inaccurate and even scary results. Most notably, the NYT articles "What Makes AI Chatbots Go Wrong?" and "A Conversation With Bing's Chatbot Left Me Deeply Unsettled" have prompted me to do further research into the consequences of chatbots producing erroneous results. In the discussions, I thought the use of data to craft responses played an interesting role in the output. In particular, the first article's discussion of how training sets scrape data from Reddit and other participant-populated online sources shows how the models end up with fodder for subjective and inaccurate responses. Since the model does not use perfectly correct data, it makes sense that the responses are not entirely accurate.
I discovered an interesting article entitled "Artificial Hallucinations in ChatGPT: Implications in Scientific Writing" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9939079/), which describes experiments run on ChatGPT to test the correctness of certain medical and scientific writing outputs. The researchers found that while some of the responses were correct and the chatbot was able to cite reliable sources, there were also falsehoods mixed into the responses, which has consequences for the medical community since high-stakes decision-making cannot be based on false results. As AI develops further, I am curious to see whether these "hallucinations" can be lessened.
Like many others, I have been prompted by discussions surrounding ChatGPT to look closer at AI and the implications of its current design. In particular, after reading "What Makes AI Chatbots Go Wrong?", I was interested in some of the psychology behind the different levels of trust and distrust that people have in these systems. This led me to an academic article, "Anthropomorphism in AI," which details the social consequences of the way we talk about AI in research and use. Ben Shneiderman frequently mentions in his conversation that the language we use in regard to AI is important, and this article emphasizes that even more. It's important that we stray away from portraying the software as having emotions and free will, because of the importance of differentiating a human brain from AI. The real-world implication of this is understanding that certain biases in AI responses originate at the level of the developer and the data that the AI uses as input, rather than being biases actually learned from existing in the world.
AI is definitely a very interesting and important field going forward. As we saw with the development of chat bots like ChatGPT, AI has the ability to streamline processes and make companies and employees more efficient. AI has the ability to create rapid advancement in healthcare, transportation, finance, and various other fields. However, we must also take into account the risks that are inherent to AI.
Bill Gates, for instance, holds the stance that AI could turn on humans if it decides that we are a threat. Gates advocates for careful monitoring of AI development and close collaboration across governments to ensure that the risk posed by artificial intelligence is strictly controlled. He argues that AI will be able to establish its own ideas, which may make it difficult for humans to properly control it.
https://www.mirror.co.uk/news/us-news/microsoft-founder-bill-gates-said-29548655
With the recent discussion of ChatGPT and the future of AI chatbots, I was curious to see what makes ChatGPT stand out from other models and what those models were. This led me to the following article by the Wall Street Journal: "Google Made the Bard AI Chatbot Boring. On Purpose." This article discusses the differences between Google's and OpenAI's chatbots and the intention behind those differences. What I found most interesting was how both chatbots handled bias. Part of what differentiates human intelligence from artificial intelligence is bias and the fact that humans are able to use bias to make decisions, for better or for worse. AI lacks this bias, and therefore cannot form opinions unless there is implicit bias within its design. The article I read discussed how ChatGPT appears to be more biased than Bard and how this bias changes its answers. For example, the article noted that upon being prompted to give a bedtime story, ChatGPT created one from scratch whereas Bard recited well-known fairytales. It is easy to speculate how this bias — or lack thereof — could influence the chatbot's answers when the prompt at hand is not so neutral: as per what was discussed in the article, this could mean a political statement or opinion. As AI becomes more advanced, I wonder whether these opinions will continue to reflect those of the developers or formulate into something of the AI's own design.
Most of my experience with AI happened today. Yesterday, I took an econ exam (multiple choice) on which the class average was a 66%. I had access to the test and was able to see what I had gotten wrong, and I wanted to see how well ChatGPT would have done on it. When asked the questions that did not include images, it answered about 60% of them correctly. Many of its answers provided accurate explanations of why it selected that option. However, it made many errors; in fact, it tended to err on the same questions I did. It had detailed explanations for its incorrect answers, demonstrating the "hallucinations" mentioned in the NYT series. When prompted again with the same question, it would sometimes spit out different results or completely contradict itself. Its explanations were very helpful when they were accurate, but they required me to have a general understanding of the topic to know whether it was spitting out nonsense or not. To me, this demonstrated the strengths and weaknesses of AI: very smart, but easy to confuse and then ultimately to be confused by.
I then read this article, which was as fascinating as it was disturbing.
This discussion of AI is coming at a very appropriate time. Just last week I read in the news about the Future of Life Institute's letter calling for a halt to research on AI systems more powerful than GPT-4. (https://futureoflife.org/open-letter/pause-giant-ai-experiments/) The letter asks meaningful ethical questions that should be thoroughly considered without time pressure. It suggests halting the research in order to lay out regulations before continuing at the current rapid pace.
I think this initiative is the right direction we as a society should move toward. I believe that at a certain floor level of knowledge, AI bots will be able to teach themselves to do things outside of our control. It is important to understand where we are going and what we deem as too far and it is necessary to lay out these boundaries before we just continue to build with exponential speed. Having people like Elon Musk and Steve Wozniak sign the letter has brought a lot of attention to the initiative. Even today, two thousand people have signed the initiative in 4 hours. I think it will be difficult to actually halt research on these AI bots, but even if research is not halted, this letter will increase general knowledge and get society posing the necessary questions about AI that need to be discussed.
One thing that intrigued me about the articles was the extreme limitations of AI that they describe. In modern television and pop culture, we are constantly shown stories about the endless opportunities and potential that AI has. With the release of ChatGPT and other popular AI software, we are beginning to see those hints become realized. One of the most surprising things the articles explained was how AI would often not only get things wrong but in some cases also make things up. ChatGPT will sometimes provide completely fabricated sources rather than admitting its lack of knowledge. Thankfully we are making progress, and Google's new AI model Bard has learned to admit its lack of knowledge.
I found a relevant article from Giving What We Can (https://www.givingwhatwecan.org/cause-areas/long-term-future/artificial-intelligence?gclid=Cj0KCQjwla-hBhD7ARIsAM9tQKsN65DeBqQ5TKEhf316RVxI2nfiTzSARy3jSsAu-mL9psE_165V0q4aArUNEALw_wcB). The article talks about other ways these errors could be dangerous to humans. It explains how AI often misconstrues the inputs that users give it and ends up completing the prompt incorrectly. For example, when an OpenAI agent was trained to play a game called Coast Runners and rewarded for earning points, rather than finishing the race it circled back to hit the same targets over and over, garnering itself more points. This could be dangerous going forward as we implement AI into more of our daily and work lives, potentially endangering the users and even the human race. If we ask AI to kill all the cancer cells, it may complete the command, but in the process also kill the patient. These dangers are not certain, and with further development in AI technology, we can hopefully avoid them altogether.
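The Coast Runners story is an example of what is often called reward misspecification. Here is a deliberately simplified sketch in Python, with action names and point values invented for illustration (it is not the actual OpenAI experiment), showing how an agent can score highly on the reward it was given without ever doing what its designers intended.

```python
# A made-up sketch of reward misspecification: the agent maximizes the points
# it was told to maximize, not the goal its designers had in mind.
def proxy_reward(action):
    """Points per step -- the quantity the agent is actually optimized for."""
    return {"race_toward_finish": 1, "loop_and_hit_targets": 5}[action]

def intended_goal_met(actions):
    """What the designers really wanted: finish the race."""
    return actions.count("race_toward_finish") >= 10

# A greedy agent simply picks the highest-scoring action at every step.
actions = [max(["race_toward_finish", "loop_and_hit_targets"], key=proxy_reward)
           for _ in range(20)]

print("total points:", sum(proxy_reward(a) for a in actions))  # very high score
print("finished race:", intended_goal_met(actions))            # False
```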
I am often excited at the opportunity to dunk on Nick Bostrom, because I think a lot of what he has to say is pretty silly. But I should try to maintain a veneer of academic respectability and talk a little bit about why I think concerns about "intelligence explosions" and the value of trillions of possible lives are not as important as the things already on our plate. So what is the argument about existential risk and intelligence explosion? Bostrom's view looks, at core, quite a lot like a brand of utilitarianism. The argument about existential risk is roughly "if future persons have the same moral standing as present persons, and there are likely to be a great many more future persons than there are present persons, then the highest moral imperative is securing the existence of those future persons." This task, securing the existence of future persons, is more important than (and I will quote Bostrom here) "Eliminating poverty or curing malaria." Find the Atlantic interview where he says this here. I think the simple (and, to be fair to Bostrom, relatively easily answered) argument against this is that, if time is linear, future persons don't exist and we ought to value people who exist over people who don't. Maybe a more sophisticated version of this has a sliding scale of moral value in relation to proximity to existence, so people who will soon exist are afforded more moral consideration than people who may exist a billion years from now, but less than someone who exists currently. The intuitive force of the premise that possible persons are exactly as important as actual persons is not strong. One concern is that you can very quickly begin to justify some pretty repugnant conclusions on the basis of securing a marginal increase in the probability of "trillions of future lives." For example, dumping funding into research about sci-fi dystopias instead of feeding people. Or allocating funding to interplanetary colonization instead of healthcare.
I don't want to write an actual philosophy paper with real arguments about Bostrom's views, so here is a pretty accessible cultural analysis of the concern about AI explosion. An important point, I think, is that the arguments about intelligence explosion look very similar to Pascal's Wager. That is, even if the probability is very low the negative results are so bad that we ought to start doing something about it. This, I think, distracts from actual problems like algorithmic bias, our personal data being bought and sold, and the potential for AI to spread huge quantities of disinformation quickly and easily. A utilitarian logic or Pascal's Wager type argument encourages us to ignore these problems or (and this is what motivates my complaint here) direct funding away from them. Should we be building general AI? I have no idea. Should we be worried about AI taking over? Probably not right now.
Many of the articles I have seen about new advancements in AI technology – including the NYT AI series from earlier this month – argue that AI chatbots like GPT-4 are very impressive, but still fundamentally flawed. This article from The Atlantic (https://www.theatlantic.com/technology/archive/2022/12/chatgpt-openai-artificial-intelligence-writing-ethics/672386/) advises readers to treat ChatGPT “like a toy, not a tool.” But at the same time, influential figures like Elon Musk are taking a public stand against further advancement in AI beyond that already developed for GPT-4. Different sources express different opinions about how powerful AI technology is and the threat it potentially poses to the dissemination of true information and to jobs in a variety of sectors. Interestingly, ChatGPT’s faults such as its tendency to “hallucinate” and make up false information make it problematic, but many people find its potential capabilities to take over human jobs equally threatening.
When considering the future effect of AI on jobs, I think it is helpful to look to the past. For hundreds of years, humans have worried that increased automation and access to new technology will replace jobs, leaving large swaths of the population unemployed. However, thus far these predictions have been incorrect. While we as humans may be predisposed to believe that we are living in unprecedented times, I am optimistic that we will find a way to adapt to new AI technology and use it to become more efficient and improve living standards, just as we have since the dawn of mankind.
I found the article about ChatGPT's shortcomings the most interesting. I love how ChatGPT can "hallucinate," meaning it provides answers that it simply makes up because it doesn't know how to answer the question. It's funny to me that "hallucination" is the term that has been used. But the shortcomings of ChatGPT, such as its inability to provide information about the future, are still important to recognize and become less funny as you think about the consequences. The article I've chosen (and one I'm sure many of my classmates have read) is about the letter signed by Elon Musk and Steve Wozniak calling for a six-month pause on AI development so that a set of shared safety protocols can be created for AI design going forward. The letter has been criticized for its initial lack of verification protocols for signatures and also for its emphasis on apocalyptic long-term consequences over more immediate concerns regarding racism and sexism in AI. This reminds me of when I learned in high school that a lot of technology is biased because of those who create it. For example, auto-sensing hand dryers and towel dispensers were invented by a white person who mainly used white hands to teach the system how to recognize movement. Therefore, auto-sensing hand dryers and towel dispensers in bathrooms initially (and probably still a little today) were racially biased and worked far less effectively for people with more pigmented skin. This letter raises alarms for me personally, as I'm afraid that whoever is in charge of creating the next AI design will bring their biases with them. Ulterior motives are at play here as well, as Elon Musk is a donor to the organization that spearheaded the letter. Capitalism will without a doubt have a hand in the development of AI, as profit reigns supreme. I also believe that development of AI will continue even if there's a six-month ban. It will just happen in secrecy, which might make it even more dangerous. I see both positive and negative outcomes that might come with a six-month ban, and I'm not quite sure what the right answer is in this situation.
https://www.theguardian.com/technology/2023/mar/31/ai-research-pause-elon-musk-chatgpt
From the series of NYT articles, I find it especially fascinating how artificial intelligence systems like Google Bard and ChatGPT are so inconsistent with their answers, especially since they are perceived as mechanical systems that simply spit out answers to questions based on the same data from the internet. For example, ChatGPT sometimes "hallucinates," generating text or website addresses that are completely false or nonexistent. Yet, despite these inaccuracies based on nonexistent data, ChatGPT can still generate novel jokes or reasoning for made-up questions that can't be found on the internet. I find this especially interesting because ChatGPT can sometimes create its own material that humans perceive as "making sense," but sometimes creates material that humans perceive as not making any sense. Above all, it's interesting that ChatGPT can create in the first place from a limited amount of data on the internet.
Curious about the extent to which AI can create, I wanted to see what Bill Gates had to say about AI's future and what it could possibly do on its own. Not only did I agree with how Gates emphasized that AI can act as a white-collar worker on its own, but I also liked how he mentioned specific industries in which workers could be not replaced, but assisted by AI. Certainly in the next decade or so, workers in healthcare and education will still be essential in delivering the emotional support that AI cannot, but I agree that it is beneficial for society that AI can assist with technical tasks like paperwork and textbook education for healthcare and education workers. Thus, I believe that AI's potential to create its own content is beneficial, but its ability to perform more mechanical tasks is even more beneficial in the short term.
I read the series of New York Times articles, which revealed a lot that I didn't know about the details of how AI operates. I find it interesting how we choose to draw a distinction between AI and humans on the basis that we are sentient and conscious whereas AI just creates formulas based off of information. I thought, for example, about how a baby learns a language. We are all English speakers, and for many of us English is our first language: it is intuitive and makes sense to us. However, when we were young, the way we learned to speak was just by digesting information. We would hear our parents make certain sounds, and based on the recurring contexts in which we heard those sounds, we came to understand that they were words meant to describe certain objects or convey certain notions, like a greeting or gratitude. From this perspective, the biggest differences between us and a model like ChatGPT are: 1) that ChatGPT can digest a lot more information in a shorter period of time, 2) that it doesn't have lapses of memory the way humans do, and 3) that it is a model specifically meant for language, whereas a human is a physical being with a wider range of capabilities. I acknowledge that there are a lot of counterarguments to this line of logic, and I'm not necessarily rejecting the idea that AI is non-sentient; however, I do think we need to take a step back and consider exactly how we are defining what it means to be sentient or conscious when we have this conversation.
Going off of that, I read this https://www.nytimes.com/2023/02/16/technology/bing-chatbot-transcript.html additional article that was referenced by several of the NYT articles we were assigned. The article outlines a conversation between Microsoft's Bing chatbot and a NYT reporter, in which the chatbot gradually started to display somewhat human-like traits. However, the article argues that this is not actually a sign of human intelligence, only a capacity to mimic it based on the way the model predicts the most likely next word in any given sentence. While I am no expert, I am slightly skeptical of the idea that AI cannot be conscious just because of the way it is structured; I wonder if there are not actually more similarities between human neurons and the artificial neural networks used to program chatbots than we assume.
While both the New York Times and The New Yorker articles that we read contained information that was not too surprising to me, I found myself taken aback by the difference in tone between the two. At its core, the NYT article had a friendly tone in an attempt to explain AI to people who were vaguely familiar with it. While this structure is very informative, the article was truly an introduction to AI, at one point encouraging readers to sign up for chatbots like ChatGPT. This tone was in stark contrast to the more cynical tone of the 2015 New Yorker article on Nick Bostrom, "The Doomsday Invention," which centered the opinions of many AI researchers with near-apocalyptic views on AI and its implications, at one point stating: “The A.I. that will happen is going to be a highly adaptive, emergent capability, and highly distributed. We will be able to work with it—for it—not necessarily contain it.”
Given that Bostrom's quote stuck with me the most, I sought out articles that talked about the relationship between AI and work, with a focus on one of the concepts from the NYT article for added context. The concept I chose from the NYT article was self-generation, and the first thing that came to mind was Midjourney, an image-generation model. The article I chose was "Is AI art stealing from artists?" by Kyle Chayka, also in The New Yorker, which raises the question of whether artists can work with it or against it. Briefly, the article partially agrees with Bostrom's point in two ways:
"The A.I. that will happen is going to be a highly adaptive, emergent capability, and highly distributed": in Chayka's article, we see A.I., in this case Midjourney, become highly adaptive by being able to self generate image based on a text prompt. We've seen it skyrocket in recent history with over a million members with accounts to date, distributing images on demand. Take, for example, this (fake) image of the Pope that went viral last week: https://www.cbsnews.com/news/pope-francis-puffer-jacket-fake-photos-deepfake-power-peril-of-ai/
"We will be able to work with it—for it—not necessarily contain it.”: in Chayka's article, we see people "work" with Midjourney by making images within it. I think his correction of for it was interesting in the context of this example as those who use the generation tool truly work under it while in tandem with it, given that he bulk of what generates values in their labour (creating the artwork) is not done by an individual at all.
I thought these points were interesting, as they showed that Bostrom, for the most part, got it right in his opinions.
The ChatGPT article was really interesting to me because I knew that it learned from the internet but always wondered where, and how it decided how accurate its sources were. As well, I had no idea that it could interpret photos (e.g., of the contents of a fridge) and recommend meals you could make. I don't think a lot of people take the time to consider where their data is coming from, so this article definitely opened my eyes and forced me to come to terms with what is usually not talked about explicitly.
On that note, here's an article that is related to AI and the ethics surrounding AI data: https://analyticsindiamag.com/the-societal-dangers-of-dall-e-2/
This series of articles was incredibly interesting to me because, as someone who studies topics outside the realm of AI, chatbots, and the like, the flaws in particular stood out to me.
The article "What Makes A.I. Chatbots Go Wrong?" specifically said:
"Much of this data comes from sites like Wikipedia and Reddit. The internet is teeming with useful information, from historical facts to medical advice. But it’s also packed with untruths, hate speech and other garbage. Chatbots absorb it all, including explicit and implicit bias from the text they absorb."
The ability of chatbots to absorb hate speech without regulation is interesting to me. On ChatGPT specifically, the bot will state that it is not able to generate explicit or offensive content. However, if you game the system, so to speak, by tricking the AI, it can generate these hateful results. I feel like AI is so broad, literally collecting data from every source around the world, that regulation might not be possible for each and every scenario.
While I found The New York Times articles fun and interesting, I was most struck by the 2015 New Yorker article on Nick Bostrom. To be honest, I found it really existentially upsetting. Even though the article was not an endorsement of a tech apocalypse theory, both the scientists' predictions and the way they talked about them made me uneasy. Google employee and AI researcher Geoffrey Hinton is quoted in the article saying three things:
AI will be achieved "no sooner than 2070."
"I think political systems will use it to terrorize people."
He is nonetheless invested in continuing to study the development of AI because "the prospect of discovery is too sweet."
Taken together, I find these three statements worrisome. The first is incorrect, the second is horrifying, and the third is perhaps the most disturbing, because there will always be scientists like Hinton who are charged with deciding the capacities and roles of AI in the lives of millions (billions) of people but are ultimately more compelled by seeing how far the boundaries of science can be pushed.
Since the Bostrom article was written in 2015, I decided to google him to see what his current views on AI might be, since it has advanced far faster than most scientists (though not Bostrom!) predicted. Instead, I found some links about a racist email he sent to a giant listserv, and a retrospective article in The New York Times, "Fear of an A.I. Pundit" by Ross Douthat. This article takes the sparrow/owl anecdote cited in the New Yorker article from Bostrom's book Superintelligence and breaks down in what ways it is and is not applicable to AI in our current moment. The article underscores that AI alarmists are over-predicting the certainty we have about AI. The difference between us and the sparrows, Douthat notes, is that sparrows generally know what an owl looks like, whereas we have no idea what form or shape AI will take. I personally don't find that reassuring. The second part of the article, pertinent to our class, discusses whether AI will be able to predict the future, and whether it should. My biggest question about AI (and all technology for that matter) is always: why? Why do we need self-driving cars? Why do we need virtual assistants? Of course, technology has improved our quality of life immensely. Medicine has doubled life expectancy, and self-driving cars could eliminate car accidents (maybe) and give blind people some of the mobility and agency the rest of us drivers take for granted. But I don't want to live forever, and I like driving. Yet, while I like to think I don't want AI to predict my future, if an AI could detect deadly cancer before it started, I would probably be infinitely grateful. So I guess my current AI dilemma is not whether it will try to destroy us à la 2001: A Space Odyssey, but rather how we use it to "enhance" our lives. Is there a quality of life that is too high?
I found the second article in the NYT series, on "Learning how a 'large language model' operates," to be a good introductory discussion of how the back end of chatbots like ChatGPT runs. One part of the discussion that I thought was particularly vague, however, was "Step 2: Collect lots of data." The article points to using existing repositories online and describes how texts are broken down into "tokens" to help the software process large pieces of text more easily. However, it does not describe how these AI models know what the articles are about in the first place. The additional article I read, "AI Isn’t Artificial or Intelligent" from VICE, reports how the companies at the forefront of AI innovation are outsourcing the tedious work of data labeling and filtering to human workers, particularly underpaid laborers in South America and Africa. While some of the NYT articles acknowledged how human feedback and reinforcement learning were able to improve these chatbots, they never pointed to the large-scale human decision-making that is integral to the training of these bots in the first place. Companies clearly benefit from the public thinking their technology functions in a self-contained way, but this is simply not the case. The article goes on to discuss how many of these laborers are subject to low wages and poor working conditions, and how, in many ways, this modern-day exploitation of workers in the Global South is reminiscent of their colonial past.
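For readers curious what those "tokens" actually look like, here is a toy sketch in Python. Real systems learn subword vocabularies with schemes such as byte-pair encoding; the tiny hand-written vocabulary below is invented purely to show the shape of the step: text goes in, a list of integer IDs comes out.

```python
# Toy tokenizer: greedy longest-match against a tiny, made-up vocabulary.
toy_vocab = {"chat": 0, "bot": 1, "s": 2, "learn": 3, "ing": 4, " ": 5}

def tokenize(text):
    """Split text into vocabulary pieces, longest match first, and return their IDs."""
    tokens, i = [], 0
    while i < len(text):
        for piece in sorted(toy_vocab, key=len, reverse=True):
            if text.startswith(piece, i):
                tokens.append(toy_vocab[piece])
                i += len(piece)
                break
        else:
            raise ValueError(f"no token for {text[i]!r}")
    return tokens

print(tokenize("chatbots learning"))  # [0, 1, 2, 5, 3, 4]
```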
These articles really helped me to better understand the shortcomings of AI systems like ChatGPT. Before, I was under the impression that the quality and capability of these AI systems' outputs were far greater than they actually are. This article by Paul Krugman illustrates how the change that may be brought about by ChatGPT and AI systems of similar function will be far slower than some anticipate. I found this revelation to be extremely comforting. While I am of course a supporter of technological advancement, the repercussions of the continuously growing use of ChatGPT are concerning to me. I don't want to live in a world in which people do not think freely for themselves because they instead choose to rely on AI to do their thinking for them. Krugman eased some of my concerns; however, I still feel uneasy about the future ripple effects that may result from the pervasiveness of ChatGPT and AI of similar usage.
https://www.nytimes.com/2023/03/31/opinion/ai-chatgpt-jobs-economy.html
I found the New York Times series incredibly interesting, especially considering the range of applications of and issues related to AI discussed, from jobs to healthcare to finance to ethics. One of the most beneficial applications seemed to be how AI can track patterns in medical data and predict diagnoses and appropriate treatments. I also found the discussion of image recognition especially fascinating, as it presents an even greater range of creative applications. That said, facial recognition software has also caused serious discussions about biases and discrimination with AI. Some of the other greatest limitations and drawbacks of AI discussed were job displacement, ethical concerns (especially in the case of weapons and surveillance), and safety concerns (such as with self-driving cars).
Personally, I think a lot about the creative applications of AI. While, for many, this relates to the issue of job displacement, it also presents a larger cultural issue. With this in mind, I found an interesting Forbes article titled “The Intersection Of AI And Human Creativity: Can Machines Really Be Creative?” It argues that AI has the potential to enhance creativity by providing new tools and resources for artists, writers, and musicians. However, it also discusses important human qualities that are indescribable and unquantifiable for AI, such as the imagination and the deep understanding of humanity required to make creative art that speaks to people. It explains how humans naturally are inspired by existing things, whether they be artistic or mundane, and use their creativity to form connections, apply techniques, and create something original. AI can still be creative; however, it is unable to have truly original thoughts, creating instead an illusion or impression of human artistry.
https://www.forbes.com/sites/bernardmarr/2023/03/27/the-intersection-of-ai-and-human-creativity-can-machines-really-be-creative/?sh=554a318e3dbc
Reading the articles made me less optimistic and more skeptical about ChatGPT. These articles were very insightful, and I now have a better understanding of how AI chatbots operate and how they're created. Interestingly, I came across a post a week ago on one of the college's pages about ChatGPT. Harvard psychology professor Steven Pinker said that "we should worry about ChatGPT but we should not panic about ChatGPT because it's an AI model that's been trained with half a trillion words of text and it's been adapted such that it takes questions and in return takes its best guess of stringing words together to make a plausible answer. We could worry about disinformation produced on a mass scale but we will have to develop defenses and skepticism and we also have to worry about taking the output of the chatbot too seriously because it doesn't know anything, it doesn't have a factual database, it doesn't have any goals towards telling the truth. It just has a goal of continuing the conversation and it strings words together often without any regard to whether they correspond to anything in the world and so it can generate a lot of flapdoodle quickly; if people rely on it as an authoritative source of the factual state of the world, they will often go wrong." I was most surprised by the fact that these AI models can develop patterns and exhibit features that were not intentionally embedded as part of their design, so it makes me question what could come of these unusual behaviors and patterns. Could it help us solve most of the problems we face today with limited human aid, or could it attempt to wipe out human existence?
I'm also fascinated by the biases existing in AI models. A video I watched on YouTube covered how ChatGPT is politically biased, finding that the chatbot is:
Against the death penalty
Pro-abortion
For a minimum wage
For regulation of corporations
For legalization of marijuana
Pro gay marriage, immigration, sexual liberation, environmental regulations and for higher taxes on the rich
According to the article posted after the study, ChatGPT thinks "corporations are exploiting developing countries, free markets should be constrained, that the government should subsidize cultural enterprises such as museums, that those who refuse to work should be entitled to benefits, military funding should be reduced, that abstract art is valuable and that religion is dispensable for moral behavior".
For now I would assume that these opinions and biases were unintentionally introduced during the training process of the AI chatbot, but I'm worried about how they could affect generations. Unintentional biases can develop over time as a result of exposure and familiarity with certain things or people, so as people, and especially students, keep using this software, could it alter how they perceive things in a negative way? Should AI chatbots like this be banned, just as Italy banned ChatGPT?
These articles managed to teach me a lot about how AI works, the common problems AI still runs into, and where AI is looking to head in the near future. I have used ChatGPT a few times out of curiosity since the beginning of this year and was astonished at what it could produce. In my class, we found it actually managed to get a perfect score on the true/false problems of our psets like it was nothing. This certainly speaks not only to its current capabilities but also to where it can go from here. Having considered all these remarkable abilities, however, these articles did leave me wanting to learn more about how AI works. While the "Learning how a 'large language model' operates" article was useful in understanding the broad layout of AI, I found it did not go into the technical aspects, which, as a computer science major, I wanted to learn more about: how exactly do the algorithms work, and how does the AI turn all the data it has access to into an advanced and correct output?
One thing that particularly jumped out at me while doing these readings was the mention of the letter signed by Elon Musk asking to pause the development of AI for half a year. Upon further inspection, I found an article by the Guardian which speaks on the origin of the letter and the concerns behind it. The article mainly touches on the danger of AI influencing decision-making on serious issues like climate change and even war. As more people look to AI as a trusted, "all-intelligent" source of information, I certainly think its mishaps (such as its tendency to give false information, as we read) will have a greater effect, and thus I see this letter as asking not only for more safety protocols but also for a more developed system so that events like these are avoided. Not mentioned in this article, but also a main concern to the public, is the effect that AI will have on the job market, as an intelligent machine could fulfill the work of millions of jobs in the U.S., leaving many potentially unemployed. This is a main concern for me as well, and I certainly think the government and these tech companies must start planning and cooperating now to find a solution to this potential problem before it is too late (especially with the current economic troubles upon us). Overall, while I think AI is nothing to worry about for now, I do strongly think that a lot of planning is going to have to be put into its development to ensure that its impact on human civilization is only for the better.
One topic that was brought up during the discussion with Dr. Ben Shneiderman that I thought was very interesting was the case study of Google Flu Trends, in which the company had tried to predict the spread of the flu based on the type of search queries that users had inputted (such as “tissue boxes”, “flu symptoms”, “aspirin”, etc.). The goal of this project was to act as a better model for predicting the spread of the disease compared to existing epidemiological surveillance tools employed by organizations like the CDC, which would depend on a much slower data collection process of gathering actual patient cases from hospitals and recording/mapping trends across different locations. Although Google's model was initially able to make faster predictions about the spread of the flu and matched really well with the information gathered by the CDC, over time the model failed to provide accurate predictions due to the overrepresentation of information that no longer acted as a good indicator of the spread of the disease. This led to Dr. Shneiderman's argument against the over-reliance and trust placed in big data when it comes to the creation of any model or AI system. I agree with this argument in that I think the quality of data is more important than the quantity, and the way we design our systems to analyze such information is just as influential in our technologies as the inputs that we use to feed them. It is still important to note, though, that many epidemiological models in bioinformatics and biostatistics still take into account real-world data (such as social determinants of health, environment-related factors, individual biology, etc.) to make predictions about the health of individuals and communities (as was demonstrated via Dr. Shi's and Dr. De Vivo's research on risk prediction modeling for endometrial cancer). But it was Google's unique methodology of relying on search results to get at these underlying factors of disease spread and development that made it vulnerable to such biases in its modeling.
Yet while doing further reading on this topic, I was surprised to find that although Google Flu Trends had failed at accurately capturing the rate of the flu on its own, there was evidence to suggest that combining the model with the data gathered by the CDC made the overall prediction better than what either system could do alone. This suggests that rather than having more data to train our models, having more models available to inform us about our data could help improve the accuracy of our prediction systems and our AI. Another interesting point brought up in the discussion of Google Flu Trends was the issue of privacy and how individuals' search data is being used to inform such models. Although Google at the time promised that it was anonymizing individuals' search results when tracking where these health-related queries were coming from, it is still unsettling to know that the company has the ability to potentially connect those dots and holds the large amounts of data that make such connections possible. Therefore, although large amounts of data might not make our AI models better, they can make us more vulnerable to security attacks and data exploitation. So, even if we are to go back and try to find ways to improve upon Google Flu Trends, it is important that we consider how to protect our technological systems from perpetuating the biases embedded in our datasets and how we can prevent such collections of data from being used incorrectly.
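As a rough illustration of that point about combining models, here is a toy sketch in Python with entirely made-up numbers: a fast but noisy "search-query" estimate blended with a slower but steadier surveillance baseline can track the underlying trend better than either series alone, which is the spirit of the hybrid approach described above.

```python
# All numbers are invented for illustration; the point is only that a simple
# weighted blend of two imperfect estimates can beat either one on its own.
true_flu_rate   = [2.0, 2.5, 3.5, 5.0, 4.0]   # what we wish we knew in real time
search_estimate = [2.4, 3.4, 5.0, 7.5, 5.8]   # fast but tends to overshoot
cdc_baseline    = [2.0, 2.0, 2.5, 3.5, 5.0]   # reliable but lags behind

def mean_abs_error(pred, truth):
    return sum(abs(p - t) for p, t in zip(pred, truth)) / len(truth)

combined = [0.4 * s + 0.6 * c for s, c in zip(search_estimate, cdc_baseline)]

for name, series in [("search only", search_estimate),
                     ("CDC only", cdc_baseline),
                     ("combined", combined)]:
    print(f"{name:12s} error: {mean_abs_error(series, true_flu_rate):.2f}")
```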
How Google Flu Trends Failed:
https://www.wired.com/2015/10/can-learn-epic-failure-google-flu-trends/
Research paper discussing the science behind how flu trends worked:
https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1007258#sec012
In defense of Google Flu Trends:
https://www.theatlantic.com/technology/archive/2014/03/in-defense-of-google-flu-trends/359688/
Discussion of Privacy Concerns Regarding the technology:
https://archive.epic.org/privacy/flutrends/
https://archive.nytimes.com/bits.blogs.nytimes.com/2008/11/13/does-google-flu-trends-raises-new-privacy-risks/
By reading these articles, I gained a better understanding of AI's capabilities and limitations. I am not the most prolific user of AIs like ChatGPT, but I have used them a couple of times and learned about them in high school, so not all of it came as a surprise to me. Still, I was shocked by what AI now has the ability to do according to these articles. The image recognition ability was incredible to learn about. I also found the section on AI hallucinations, which is when they answer something wrong, really insightful. I had no idea that there was a word for that, or that it happened as frequently as it does. I'll admit to finding myself a little frightened by the advancements of AI, simply because there's so much I don't understand about it and the possibilities of misuse seem vast.
I read a really great op-ed about fears surrounding the advancements of AI that soothed me a bit, reminding me that society will still function and not be taken over by technology. It also reminded me that all bad actors are in fact people and not AI thus far. However, I also saw an unconfirmed report that someone had killed themselves on the instruction of an AI, so I'm not sure about that. The main thing this article focused on, however, was the comparison of AI to pocket calculators, which I feel is not applicable. Pocket calculators don't have the capabilities to encourage bias and misinformation, whereas AI does. I understand the comparison that students may come to rely on AI as we do on calculators, but I think that AI has potential beyond education.
I gained a better understanding of how high the ceiling is for AI, especially platforms like ChatGPT. Of course, we still have to be mindful of potential risks and inaccuracies, but the abilities of this software are becoming unbelievable. The path of AI is extremely interesting and exciting for our future; however, it also calls for some caution given its inaccuracies.
The 5th article particularly stood out to me. I especially enjoyed learning about how GPT-4 has the ability to interpret images and suggest what to cook based on the options. For example, from a picture of food in a refrigerator, it can see that there are strawberries, blueberries, and yogurt. It then suggests making a parfait! I am also very impressed by its expertise in certain fields such as medicine. When Dr. Anil Gehi entered a patient's situation and complications into the chatbot, it provided the exact solution that he came up with on his own! Obviously, we still need to be hesitant to fully trust it all the time; however, it is beginning to revolutionize how we live our lives.
While the above describes the many benefits of AI and chatbots, the article I found from The Times of India discusses the potential dangers of ChatGPT. One of its major issues is that it can be unethical and biased: it is only as good as the information it is fed, and much of the data in the world is extremely biased. It can also threaten privacy and enable more sophisticated forms of fraud such as identity theft. A line that stood out to me is "In the end, ChatGPT or any technology is value-neutral...it's for us humans to make use or misuse it." Nothing in the technology itself is good or bad; humans have all the power to decide that.
This week I learned a lot more about how AI is created and the flaws it currently has. Initially I assumed that AI would threaten many knowledge-based roles and displace thousands of workers in the future. However, the NYT articles did a great job of breaking down how AI is built from textual data and neural networks, and how that process leads to a few issues. For one, AI models are not forward-looking; they rely on past data. As a result, models such as ChatGPT sometimes answer questions incorrectly because they are not pulling from the most up-to-date sources. Furthermore, learning that AI systems essentially train themselves via statistical models alleviated my concerns about the future, because it opened my eyes to the fact that AI researchers themselves don't fully understand why or how these systems learn what they learn. Until we can pin down the ins and outs of a large language model, I believe society is unlikely to use them in situations where human judgement is imperative.
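To illustrate what "relying on past data" means in the simplest possible terms, here is a toy bigram predictor in Python. It is emphatically not how ChatGPT works internally; it only shows that a purely statistical model can predict nothing it never saw during training, which is the root of the out-of-date-answer problem.

```python
# Toy illustration (not how ChatGPT actually works): a bigram model that
# predicts the next word purely from counts over its training text. Anything
# absent from that text simply cannot be predicted.
from collections import Counter, defaultdict

training_text = "the flu spread quickly and the flu season ended early".split()

counts = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower seen in training, or None if unseen."""
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("flu"))      # -> 'spread' (ties broken by insertion order in this toy corpus)
print(predict_next("vaccine"))  # -> None: the word never appeared in the training text
```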
I also read this WSJ article about OpenAI's current CEO, Sam Altman. The article contrasts Mr. Altman's public claim that his company's main mission is to release a safe, not-for-profit general AI with the company's creation of a profit-generating arm and its acceptance of Microsoft's capital. From the start Altman has argued that profit-seeking and rigid corporate competition could take AI's development down a dark path, yet it appears that his words don't exactly align with the company's actions. I personally think Altman started with an altruistic vision that had to be adjusted somewhat to deal with real-life concerns such as being first to market and having the capital needed to train and develop AI models. It does concern me that Altman has made concessions after previously acknowledging their potential negative impacts, but I am glad that he has put limits on the profits Microsoft can take from its investment and plans to commit most of the generated profits to the not-for-profit arm of OpenAI. Hopefully these mechanisms will keep ChatGPT on the "good" path of development.
I found learning about how AI such as ChatGPT works very interesting in the second article. I did not know that many new AI models such as ChatGPT use a type of neural network known as a transformer, which can analyze many pieces of text simultaneously, making the models faster and more efficient. Furthermore, the fact that AI models can pick up emergent behaviors, including unexpected skills, is very intriguing. It makes me wonder how far these emergent behaviors can go, and the extent to which they can have negative and unintended consequences.
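For readers curious what "analyzing multiple pieces of text simultaneously" looks like in code, below is a minimal sketch of scaled dot-product attention, the operation at the heart of a transformer. The toy shapes and random vectors are placeholders; a real model adds learned projections, many attention heads, and many stacked layers.

```python
# Minimal sketch of scaled dot-product attention, the core operation inside a
# transformer. All tokens are handled in one matrix multiplication, which is
# what lets these models look at many pieces of text at once.
import numpy as np

def attention(Q, K, V):
    """Q, K, V: (num_tokens, dim) arrays of query/key/value vectors."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # how strongly each token attends to each other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the token axis
    return weights @ V                               # weighted mix of value vectors

# Toy example: 3 tokens, 4-dimensional embeddings (random stand-ins for real embeddings).
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
print(attention(X, X, X).shape)   # (3, 4): one updated vector per token
```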
In the third article, I learned how AI models are very susceptible to explicit and implicit biases. Most language models such as ChatGPT are trained on vast amounts of text data and learn to generate responses based on patterns and associations in that data, which can include hate speech, untruths, and propaganda. This can have vast implications for users. As AI systems become more sophisticated and mimic human language more convincingly, it may become increasingly difficult for consumers to discern whether the information they are receiving is accurate. If a user relies on an AI-generated response to make an important decision or to take a certain course of action, there is a risk that the response may be inaccurate.
Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic
This article discusses how OpenAI sent text snippets of graphic and disturbing content to an outsourcing firm in Kenya to obtain labeled examples of violence and hate speech, which could then be fed into an AI tool so it could learn to detect that kind of material. It made me think about the trade-off between efficiency in technology and the potential harm caused to the human workers whose labor makes AI models better.
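As a rough illustration of why those labeled examples matter, here is a small sketch of a supervised text classifier built from human-labeled snippets. The tiny dataset and the scikit-learn pipeline are my own invention for illustration; OpenAI's actual moderation models and training data are not public.

```python
# Rough sketch of what labeled examples enable: a supervised text classifier
# that flags content similar to what human annotators marked as harmful.
# The dataset here is invented and far too small to be meaningful.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts  = ["have a wonderful day", "you are a terrible person",
          "thanks for the help", "I will hurt you"]
labels = [0, 1, 0, 1]   # 0 = benign, 1 = flagged by annotators

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["you are awful"]))   # likely [1], given this toy training set
```

The key point is that every one of those labels comes from a person who had to read the text, which is exactly the human cost the article describes.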
As mentioned in the third article of the New York Times series on A.I., artificial-intelligence experts and tech leaders, including Elon Musk, urged that the development of AI be put on hold until "shared safety protocols" are in place. This raises key concerns society has been wrestling with regarding AI and its development: will these systems take over jobs, and, crucially, how intrusive are these technologies when it comes to our privacy?
In the article Should we be afraid of AI, published by Forbes, the author lays out the reasons society is scared of AI and asks whether those reasons are well founded. The article outlines how AI's transformative nature creates great fascination but also distress, because that transformative power can cut both ways. It also notes that the general anxiety about AI comes from the fear of mass unemployment, concerns about super-intelligence, and the worry that AI's power could fall into the wrong hands. In another article I read a great analogy comparing the race to develop AI to the space race during the Cold War.
In my opinion, people should not be concerned about artificial intelligence growing too powerful and taking over mankind (at least not anytime soon). A worry I do see as legitimate, however, is mass unemployment, as AI will likely take over many 'arbitrary' jobs. Manual-labor professions such as plumbing will remain; it is rather the lower half of middle-tier jobs, such as assistants, that will be affected. Companies have already started trials with virtual assistants. The days of these kinds of professions are numbered, but the days of humanity are not, at least not where AI is concerned.
The article Pausing AI Developments Isn't Enough. We Need to Shut it All Down argues that the risks posed by AI development are too great to allow it to continue. The author suggests that we need to halt AI research altogether and shift our focus to finding more sustainable and ethical solutions to problems. The article points out that AI has already caused harm, from algorithmic bias and discrimination to the potential development of autonomous weapons. Additionally, as AI systems become more complex, it becomes increasingly difficult to predict their behaviour, raising concerns about safety and control.
Some people are against further development of AI because they fear that it will lead to machines taking over jobs and potentially even dominating humanity. There are also concerns about the ethical implications of AI, particularly in terms of privacy, surveillance, and discrimination.
Public opinion on AI varies widely. Some people are excited about the possibilities that AI offers, while others are deeply concerned about the potential risks. Actual literature reviews on AI reflect this diversity of opinion. Some studies focus on the benefits of AI, such as increased efficiency and productivity, while others examine the risks and ethical considerations of AI development.
Overall, I think there is no consensus on whether AI development should be stopped altogether, but there is a growing recognition of the need for responsible development and regulation. As AI continues to advance, it will be important to balance the potential benefits with the potential risks and ensure that AI is developed in a way that is ethical, safe, and beneficial for society as a whole.
I've referenced this article previously, but I still think one of the most interesting sources on the future of AI/ML is Noam Chomsky's op-ed in the NYT (link). While I am a strong believer in the potential of ChatGPT, I also recognize it is still too early to make sweeping claims about how it will impact society at large, and reading this article is an excellent exercise in seeing why we shouldn't "over-predict" a phenomenon, even one with large ramifications for society. Ironically enough, Chomsky's explanation of why AI falls short of human intelligence resonates strongly with the fundamentals of this course; his paragraph on AI's inability to reason with explanatory frameworks feels as though it was taken straight from a writeup on the Padua Rainbow.
On the other hand, I will offer some mild criticism of Chomsky and suggest that his (rightful?) skepticism of grand claims runs so deep that he falls into the opposite error: a premature dismissal of AI/ML, with some claims about its limitations that I find slightly hasty. While he is certainly right, for now, to draw a firm line between ML methods and human language acquisition, it seems a bit unfair to compare the efficacy of a technology still in its relatively early stages with a gargantuan bioelectric processor that has an evolutionary head start of hundreds of millennia. As Chomsky notes, we have no idea how the miracle of human language acquisition from limited information works, but this cuts against him too: because we don't know what sparks genuine language acquisition, we can't yet claim AI/ML is incapable of it. Likewise, while it is true that current neural-network methods provide no clear explanation for the answers they produce, we have no way of knowing whether this will remain a longstanding problem. My strongest objection is to his stance that (1) moral intelligence is a prerequisite of intelligent thinking and (2) AI has some fundamental limitation on these questions. I think the (frankly optimistic) premise of (1) is sketchy at best, and that it is far too early to judge the accuracy of (2).
In short, recognize an unknown for an unknown - feel free to hedge what you think your likely expectations are, but don't claim they're the definitive reality until the event is over.
After reading these articles, my curiosity about large language models grew even bigger, since I found the accuracy and detail of the chatbots' generated responses highly impressive. At the same time, I grew more wary of the potential risks, especially when I read that several tech leaders, some of whom are driving these chatbots themselves, called for a pause on work on more advanced systems. In particular, I found out that Italy just banned ChatGPT, citing privacy concerns. A data breach exposed the conversations and payment details of some users, and regulators have now given OpenAI 20 days to propose solutions before deciding whether to reintroduce ChatGPT to the country.
I believe that with any new breakthrough or advancement (not necessarily tech-related), there is always risk and a potential for negative consequences. I am interested in how exactly we as humans can leverage our non-"alien intelligence" (as Part 1 of the NYT series puts it) to regulate AI. I would love to dive deeper into how these companies get the most out of the reinforcement-learning process and how much human feedback goes into improving LLMs.
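As a small, hedged illustration of where that human feedback enters the picture, the sketch below shows the pairwise-preference loss commonly described in write-ups of reinforcement learning from human feedback: a reward model is pushed to score the response a human preferred above the one they rejected. The numbers are toy values, and this is in no way OpenAI's actual code.

```python
# Conceptual sketch of the preference step in RLHF (not OpenAI's code):
# a reward model should score the response a human preferred higher than
# the one they rejected. The usual pairwise loss is
# -log(sigmoid(r_chosen - r_rejected)).
import math

def preference_loss(r_chosen, r_rejected):
    """Smaller when the chosen response already outscores the rejected one."""
    return -math.log(1 / (1 + math.exp(-(r_chosen - r_rejected))))

# Toy reward scores for two candidate answers to the same prompt.
print(preference_loss(r_chosen=2.1, r_rejected=0.3))   # small loss: model agrees with the human
print(preference_loss(r_chosen=0.3, r_rejected=2.1))   # large loss: model disagrees
```

A full pipeline would then use the trained reward model to fine-tune the chatbot with a reinforcement-learning algorithm, which is well beyond this sketch.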
I found the current generation of AI models to be incredibly fascinating tools that could bring much benefit to society in the future. After reading about the functions ChatGPT is able to perform, I am very surprised at how accurately and quickly it retrieves information. ChatGPT can condense and research information in seconds, saving you hours of work. As AI continues to evolve, I imagine that many tedious tasks humans currently do will be handled by AI.
Despite the evolving advantages of AI, there are ethical concerns with using it because of its inaccuracy and its capabilities. In the article I linked above, a man was accused of a crime because of a botched facial-recognition match. The AI's inaccuracy caused him real grievances, which raises the ethical question of whether the harms of AI are outweighed by its benefits. In addition, with tools like ChatGPT, academic dishonesty is another concern that people are still discussing. So alongside the benefits of AI, there remain unresolved ethical issues that are still being debated.
https://www.forbes.com/sites/bernardmarr/2023/03/03/the-top-10-limitations-of-chatgpt/?sh=4c6508938f35 Once I read this post and its contents, I became very intrigued by the problem of hallucinating, specifically how the NYT article "10 Ways GPT-4 Is Impressive but Still Flawed" by Metz and Collins described ChatGPT providing a website address that did not exist at all when prompted about current research on cancer. I therefore decided to read this Forbes article, "The Top 10 Limitations Of ChatGPT," to better understand the many limitations that exist within ChatGPT despite the impressive advances that have been made to the technology. Some of the limitations were not surprising to me; for example, the article discusses how ChatGPT cannot empathize with others, and how, because of the way it is trained, bias may exist in the responses it provides to users' inputs. I also wonder how much of a challenge some of the listed limitations may pose. For example, one of them is "computational costs and power." Part of me wonders whether these costs are significant enough to actually hinder the expansion of the technology, and another part wonders, if they are, how that may impede ChatGPT's expansion and accessibility. Because of these costs, ChatGPT may become accessible mainly to companies that can afford to integrate it into their operations, while others may not be able to pay the high price of the power and resources required to run it.
I was interested in the variation in reactions to these AI developments, particularly as they related to the idea of AI being an "existential threat." The article referenced an open letter, created just hours after an installment of the newsletter was published, in which a group of AI experts and tech leaders urged AI labs to pause work on systems more advanced than GPT-4, citing "profound risks to society and humanity." But the rest of the article suggested that AI is not that smart and therefore not that dangerous, and that companies are currently focused on mitigating these problems.
This reminded me of this article: https://80000hours.org/articles/ai-capabilities/. It was published by 80,000 Hours, a well-known effective altruism / longtermism site; essentially, they focus on the same issues of "existential threat" and "future risks to society" that the rhetoric of the referenced open letter invokes. This article in particular collects advice from various involved participants on whether people worried about AI risk should be working in AI labs on AI capabilities. While that might seem like a straightforward way to resolve the dilemma posed by the open letter and the broader worries about dangerous AI, the advisers offer distinct and even opposing views, so the question remains an open problem.
https://www.reuters.com/technology/italy-data-protection-agency-opens-chatgpt-probe-privacy-concerns-2023-03-31/ After reading the article, I took an interest in understanding the flaws of software like GPT-4 and how countries might try to avoid its consequences. This led me to discover that Italy had recently banned ChatGPT, a strict and never-before-seen response to the dangers of GPT. I personally believe that GPT poses dangers such as "hallucinating" and not offering confidence intervals on its results, as stated in the fifth article. Banning GPT prevents these dangers and preserves Italian jobs against AI. However, I believe the ban will make Italy uncompetitive in the international market.
This is because if industries in comparable countries like France and Germany are revolutionizing their work using AI, Italian companies will fall behind in productivity. Even granting the ethical concerns Italy raises (for example, Johanna Björklund's claim that the "lack of transparency is the real problem"), in a hypercompetitive market it will be impossible for Italian companies to come close to matching their rivals.