Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- `rdc_mcqkyjc`: "> US takes Canada / With the longest land border in the world and 40 millions …"
- `ytc_UgzEwzYie…`: "Really nice follow-up! Appreciate you going in to more detail on copyrighted wor…"
- `ytc_UgzAdoUTO…`: "AI will be nightmare for humanity ; just look at the current geo-political tussl…"
- `ytc_UgwrSGa0t…`: "There will be only just a bunch of people around, while technology working for t…"
- `rdc_glhtbky`: "Work in IT consulting and yea, pretty much. The trend is shifting away from a la…"
- `ytc_Ugz06NtLd…`: "The only problem with the whole 'skynet' theme is that it presumes that no gover…"
- `ytc_UgzTpeOgk…`: "remember the comic where a ai out lives humanity floats through the universe asc…"
- `ytc_Ugw_XvSLH…`: "comments are stupid. tesla is smart enough to know if there is a lot of cars the…"
Comment
> Even a simple AI with good intentions could be problematic.
> Imagine the Gov. establishes an AI to watch the overnight shift to see if any foreign invaders try to attack at night.
> Only to neglect to properly code the AI that exceptions should be made on July 4th.
> Anyone whos done even rudimentary coding knows what you intend for it to do and what happens will be worlds apart for an undetermined interval.
> The problem with AI is you would have to be extremally careful. Something the Gov. increasingly as of late has not been concerned with.

Source: youtube · Topic: AI Governance · Posted: 2023-04-18T23:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxJapG0m3i_j-14D-Z4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_Ugw0MDIrdu13LWL9moB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwZsBPDbHjAkHPykJN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzJFqQWvY_KPnPvECJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw_xETqovaLrlmUy9p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyVDS6rvCpk76wCl594AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzrPSsvP2kwG56LfA54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgzkzwjlbDhIKbipbY94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzI9r_Pu7xKVCOsLah4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgzSkL_I2x6atir6ZSh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
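A batch response like the one above can be checked before it is stored. The sketch below is a minimal Python validator, assuming the four coding dimensions shown in the table and the value sets observed in this document (the full codebook may allow additional values); it also builds an index so a record can be looked up by comment ID.

```python
import json

# Allowed values per dimension, inferred from the samples in this document.
# Assumption: the real codebook may define additional values.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "indifference", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with missing or unknown values."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {value!r}")
    return records

# Hypothetical single-record response used for illustration only.
raw = ('[{"id":"ytc_x","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}]')
coded = validate_codings(raw)

# Look up by comment ID, as the page header describes.
index = {rec["id"]: rec for rec in coded}
print(index["ytc_x"]["policy"])  # prints: regulate
```

Rejecting a whole batch on one bad value is deliberately strict; a production pipeline might instead flag the offending record for manual re-coding.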