"Problem solving with Human-AI interaction" presented by Alex Davis
- Shared screen with speaker view

17:26
Happy to be here!

18:11
https://www.cmu.edu/epp/news/2020/epp-faculty-seminar-series.html

42:03
Are these criteria not correlated in some way?

44:42
Does some mismatch between subjective judgement, instrument behaviour and specification of parameters account for the lack of replicability?

49:11
The cube has a perimeter of 80mm whereas the cylinder has a circumference of 31.4mm, so traversing time is more than doubled. Does that not affect how one layer adheres to the next?

01:14:02
You cannot expect consistency in preference sets

01:26:58
Do you have thoughts on how well the group's recommendation will line up with the maximum of the utility function? That is, I would expect some participants to be more outspoken than others and have a greater impact on the recommendation, which may not always reflect the strength of everyone's preferences.

01:28:33
Neat. Thanks!

01:28:56
Piggybacking off of Kristen, fairness in ML is a growing field, some issues stemming from disparities in data collection. Have you looked at this in your utility work?

01:29:11
Alex a great talk as usual — thank you!

01:30:24
Any thoughts on how AI-expert interaction could improve decision making in early-stage R&D projects?


01:31:02
IV-III=I