"Problem solving with Human-AI interaction" presented by Alex Davis - Shared screen with speaker view
David Rounce
17:26
Happy to be here!
Peter Adams
18:11
https://www.cmu.edu/epp/news/2020/epp-faculty-seminar-series.html
Hadi Dowlatabadi
42:03
Are these criteria not correlated in some way?
Hadi Dowlatabadi
44:42
Does some mismatch between subjective judgement, instrument behaviour and specification of parameters account for the lack of replicability?
Hadi Dowlatabadi
49:11
The cube has a circumference of 80mm whereas the cylinder has one of 31.4mm. Traversing time is more than doubled. Does that not affect how one layer adheres to the next?
Hadi Dowlatabadi
01:14:02
You cannot expect consistency in preference sets
Kristen Allen
01:26:58
Do you have thoughts on how well the group’s recommendation will line up with the maximum utility function? That is, I would expect that some participants will be more outspoken than others and have a greater impact on the recommendation, but that may not always reflect the strength of everyone’s preferences
Kristen Allen
01:28:33
Neat, thanks!
Octavio Mesner
01:28:56
Piggybacking off of Kristen, fairness in ML is a growing field, some issues stemming from disparities in data collection. Have you looked at this in your utility work?
Hadi Dowlatabadi
01:29:11
Alex a great talk as usual — thank you!
Matthew Wagner
01:30:24
Any thoughts on how AI-expert interaction could improve decision making in early-stage R&D projects?
Peter Adams
01:30:27
https://www.cmu.edu/epp/news/2020/epp-faculty-seminar-series.html
Octavio Mesner
01:31:02
IV-III=I