Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
AI could go terribly wrong. When humans are born they are in a neutral behavior and learns though life the good an bad behaviors. Ai must have reasoning to be human like with good behaviors otherwise ot could see other potentials and could also turn into a mess and that could be a problem to unlearn. We are not perfect and AI might want to attack that thinking its superior and will learn from its own mistakes but not in human reasoning. That will be most difficult. AI will think with logic and not feeling.
youtube
AI Governance
2024-02-10T03:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugx1o3edqVy9vlNkWFF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwFkJ-BOM7KWbfx0Q94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzxxXT3Pz2LjwcD0Zp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxbK94BgVK9K1011vd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxnkgYwU_zvxyONVlZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxRmpWszS5aEX79ijF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"disapproval"},
{"id":"ytc_UgzAz981Y5JQjrl4PW94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz3RCxoiXZwPaYMYOB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx3duYAeKkJUb5jpAB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwJ2EHa2ZZ2FTaIyNJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"approval"}
]
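The raw response above is a JSON array of coded records, one per comment, with one value per dimension. A minimal sketch of parsing and validating such a batch, assuming the category sets inferred from the codes visible on this page (the tool's actual codebook may allow other values):

```python
import json

# Allowed values per dimension, inferred from the coded output shown above.
# This is an assumption, not the tool's authoritative codebook.
SCHEMA = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "disapproval",
                "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject out-of-schema records."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    for rec in records:
        # Comment IDs on this page use ytc_/ytr_ prefixes (comment/reply).
        if not str(rec.get("id", "")).startswith(("ytc_", "ytr_")):
            raise ValueError(f"bad comment id: {rec.get('id')!r}")
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec['id']}: {dim}={rec.get(dim)!r} not in schema")
    return records
```

A check like this catches the common failure modes of batch coding prompts: non-JSON output, a missing dimension, or a value the model invented outside the codebook.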