Following up on my post summarizing the most interesting takeaway from the interview with Professor Gilbert, I would like to ask how AI could improve our understanding of predictive models and make nudging efforts in general more effective. Gilbert mentions that people generally behave more desirably when they are not bombarded with information (for example, "doomsday" framing of the climate crisis), but rather when they receive tangible information with clear instructions on how to act prosocially. My questions are: What role could AI play in customizing the information that reaches consumers so that it best encourages them to act in a certain way? Would it be ethical for AI to adapt information for users based on their understanding of a topic? And can "perfect nudging" (indirectly forcing people to make a decision or think a certain way) with the help of AI ever be unethical if it is done for a great cause?