Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
People are worried that AI will take jobs away. If we, clinicians, do not equip …
ytc_UgyvMpGHS…
Yeah my 62 Yr old uncle asked ai how to descale his kettle it told him to put 2…
ytc_Ugxbtj-HV…
It's already happening threw the Brain Initiative. The US government has a self…
ytc_Ugzp5ssls…
Every time I heard some crap about "AI allow disabled people to create art", I r…
ytc_UgyfnRA8c…
1000 A.D. : you must have sword
1800 A.D. : you must have gun
2000 A.D. : you mu…
ytc_UgxvFATDM…
I believe major companies will try to push AI "movies" or "Shows" once or twice …
ytc_UgxNbxp4V…
these guys are so lazy that they ask the a.i to give them a better prompt XD, th…
ytc_UgzZ09sxi…
Eventually, many people could work by using VR/RC to teleoperate a robot from t…
ytr_Ugyzvs-0A…
Comment
Never ever, the scientist have responsible to prevent the robots. That's it, we don't have nothing to fear about any robotic things.
If we can make robots, it will be a great achievement for human race.
I thought that is a good things to help humanity.
But, on the other hand if robot were successfully made there can be very disaster for the poor people that who don't have a job Or labour. Because the AI will be more comparative and far way more doing better than human does... Then many companies will be more likely to use robot instead of human..
youtube
AI Governance
2024-02-02T11:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwUskLnBLAOknJkgQ94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxiI673b_R_gZjuaw54AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxIPWHrwb6H50VMdRF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx5eIWMYHAc-6gPWbF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxsKE_awP7ODo2zeaR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzRoTtpTFa1Sr5WFmV4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyttExFUjXJR_gi_v14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz3NfItEzZKpg-yRh54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzmjISoR37cmwMIpIN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxX34vJKfDvEMWPP7N4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"mixed"}
]
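The raw response is a JSON array with one record per comment ID, each carrying the four coding dimensions shown in the table above. A minimal sketch of how such a response could be parsed and sanity-checked; the allowed value sets below are inferred from the sample output and are an assumption, not a published codebook:

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# The actual codebook may define additional categories.
SCHEMA = {
    "responsibility": {"none", "government", "developer", "distributed", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"indifference", "outrage", "mixed", "fear", "approval", "resignation"},
}

def validate_codings(raw: str) -> list:
    """Parse a raw LLM response and check each record against the schema."""
    records = json.loads(raw)
    for rec in records:
        if not rec.get("id"):
            raise ValueError("record is missing a comment id")
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim} value {rec.get(dim)!r}")
    return records

# Example using the first record from the response above.
raw = ('[{"id":"ytc_UgwUskLnBLAOknJkgQ94AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
records = validate_codings(raw)
print(len(records))  # 1
```

A check like this would catch the common failure mode where the model invents an off-schema label (e.g. "anger" instead of "outrage") before the coding is stored.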