What do you, personally, think you will remember most about this interview a year from now? (10 points)
Bearing in mind all of this year's conversations about the limitations and opacity of AI, what will likely stick with me most is the cautionary tale about the limits of machine learning models. As Schneiderman aptly points out, we cannot rely on these models as infallible predictors. Despite their apparent "potency," a word he uses to describe the capacity (or lack thereof) of AI, there is an inherent fallibility in AI systems, compounded by the opaqueness of their programming. The concept of the "black box" of AI, where even experts struggle to fully comprehend the inner workings of these algorithms, serves as a stark reminder of the need for critical thinking and skepticism when embracing technological solutions.
How do you think any aspect of the interview will affect your own future, or society's future? (30 points)
Considering the insights shared during the interview, particularly Schneiderman's cautionary remarks about the role of machines in our lives, I foresee a lasting impact on my trust in AI technology. His reminder that machines are tools, not partners, encourages us to remain skeptical of the promises of AI and to avoid falling into the trap of what he called "dangerous pro-AI thinking." I will continue to approach AI with a critical lens, recognizing its potential pitfalls and complexities. The issues surrounding AI, notably illustrated by recent incidents involving Tesla's AI driving features, reinforce my decision never to trust technology blindly. I also appreciated his points on "wording" and being prudent about the words we use when talking about AI. We should refrain from attributing human qualities and actions to AI, a point I will keep in mind as I continue to learn about the technology, and one we should all remember. As a writer (and possibly a future journalist), I will be wary of using inherently "human" words when describing AI.