Surprising Information: I was very surprised by the focus on the language we use to describe technology. For example, Ben Shneiderman argued that machines are not partners but tools, and that the language we use to describe machines can be concerning. I had not expected the language we use to refer to machines to be considered controversial. I can now understand how treating machines as partners or collaborators might seem dangerous, and I found Ben Shneiderman's attention to this language fascinating. Another example of language that was discussed was "mimic": in Ben Shneiderman's view, iPhones do not mimic music; they are tools. I think we often consider the implications of the technologies we develop and how they may affect us in the future, but we less often pause to think about how we describe those technologies, so I enjoyed this aspect of the discussion.
Additional Questions: Considering how adding more data can make a prediction less accurate, I would love to ask more about how we can determine whether data should or shouldn't be used in a prediction. For example, if some data contributes very little to the accuracy of a prediction, but still contributes something, how do we decide whether to include it? And how do we know whether data will make a prediction better or worse if we cannot be sure of its impact on the phenomenon we are predicting? Also, how are data weighted in predictions, since some data will be much more relevant to a prediction than others? I would love to learn about these questions because they seem very important for making accurate predictions. I am sure the answers are complex and may be difficult to give, but I would still love more insight into them, to better understand machine learning in general and how the data we choose affects its applications and effectiveness more broadly.
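(To make my first question more concrete, here is a rough sketch of the kind of answer I imagine, my own illustration using scikit-learn on made-up synthetic data rather than anything from the video: one simple approach is to compare a model's cross-validated accuracy with and without a candidate piece of data, and in a linear model the "weights" I asked about are just the fitted coefficients.)

# Illustrative sketch only: synthetic data, not a real prediction task.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Three informative features plus one pure-noise feature.
X_informative, y = make_regression(n_samples=500, n_features=3, n_informative=3, noise=10.0, random_state=0)
rng = np.random.default_rng(0)
X_with_noise = np.hstack([X_informative, rng.normal(size=(500, 1))])

model = LinearRegression()

# Does the extra feature help? Compare cross-validated R^2 with and without it.
print("R^2 without extra feature:", cross_val_score(model, X_informative, y, cv=5).mean())
print("R^2 with extra feature:   ", cross_val_score(model, X_with_noise, y, cv=5).mean())

# The "weights": in a linear model, each feature's learned coefficient.
model.fit(X_with_noise, y)
print("Learned coefficients:", np.round(model.coef_, 3))

Of course this only measures one narrow notion of a feature "helping," which is part of why I would love a fuller answer.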
I agree that the distinction between tool and partner is an interesting one, because I also don't think much about the framing of technology in my day-to-day life. One thing I'm curious about is how that distinction shapes the impacts of technology. Do you think that framing AI as a partner, rather than a tool, would significantly change the course of AI's impact on the world? From my vantage point, I believe the answer is no, but this video has made me aware that I need to learn more.
I also think your question about how we decide whether data should be included or left out is a good one! I'd like to think that the criteria we settle on are some combination of the data's efficacy and the willingness of its sources to give up their privacy.