Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Summary — “The Only 5 Jobs That Will Remain in 2030” (based on Dr. Roman Yampols…
ytc_UgyRsOt6Q…
But if we lose our jobs we do not buy goods. Henry Ford needed his workers to b…
ytc_UgxUGweC8…
I am finding hard to use AI to troubleshoot my problems, sometimes it hallucinat…
ytc_UgxblKZaL…
We have no idea how to make true AI. It’d be like engineering a human brain, we …
ytr_UgxjoKqrj…
You assume that AI will have the same kind of consciousness a human would have. …
ytc_UgwdfYsHb…
i dont think everyone understands how much his testimony could have crippled ope…
ytc_UgwqTmX24…
I would be worried that rich people use machines and take control over the means…
ytc_Ugxo_u6lA…
I imagine that he uses ChatGPT frequently, and talks/researches a lot about thes…
ytc_UgyE3YifH…
Comment
The Hanson Robotics CEO misspoke. He should have said "simulate" being as conscious, creative as a human. He's playing into the mass ignorance about the nature of AI and computers, which are mechanical, adding-machine based entities. All the creativity and consciousness comes from their creators. That's the "art". The burden of proof is on him. He is either dumb, gullible, or irresponsible. (I recommend "What Computers Still Can't Do: A Critique of Artificial Reason" by Hubert L. Dreyfus, MIT Press – one of many good books on the philosophy of AI).
youtube
AI Moral Status
2018-02-12T02:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwEyiGTu1hCNU0GCe14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx3c6044UZ7e97ebkB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwcRIMaeDI_YdxDzLF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxMYc9A0xxrXgMKheF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxOGq-TZgVbOd8CtF94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyjZrrLyu0mEnQfxeh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyth7X7E93HMOO2IRl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyrKBbXNxfw7Z919vd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxuLeI11j9v2G1wtL14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw9jSUGNPw1P6DYgoB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
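The raw response above is a JSON array with one object per comment, keyed by comment ID. A minimal sketch of how such a batch response could be parsed and indexed for lookup (field names `id`, `responsibility`, `reasoning`, `policy`, and `emotion` are taken from the response itself; the variable names are illustrative):

```python
import json

# Two rows copied from the raw LLM batch response above.
raw_response = """
[
 {"id":"ytc_Ugyth7X7E93HMOO2IRl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgwEyiGTu1hCNU0GCe14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
"""

# Index the coded dimensions by comment ID for O(1) lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coding for one comment ID.
coding = codes_by_id["ytc_Ugyth7X7E93HMOO2IRl4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # developer outrage
```

The same lookup-by-ID pattern is what the "Look up by comment ID" view implies: the comment shown above (coded developer / deontological / liability / outrage) matches the `ytc_Ugyth7X7…` row in the raw response.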