Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- I tend to think the AI eventuality described in Dan Simmons' Hyperion series is … (ytc_UgxmpET2u…)
- They can use them for so many important issues that we cannot accomplish . We c… (ytc_UgzljMvoC…)
- "If you tried to use it in some sort of like enterprise software or any app that… (ytc_UgzYBXsjA…)
- Been using AICarma to monitor AI trends; it’s really helping me adapt my brand s… (ytc_UgwoA_L6f…)
- I'm so sick of so called smart people making comments about this and being so fa… (ytc_UgzsyqQA4…)
- Lol no. Anyone who sits in front of a computer all day is automatable. Including… (ytr_UgwFUR5I0…)
- That dumb argument aside. What are y’all’s opinions on using that ai tool for re… (ytc_UgxWsIsNv…)
- Another great discussion. Question: who owns the output of AI? If Claude or gpt-… (ytc_UgxSL1573…)
Comment
In perfect conditions autonomous vehicles do pretty good, but anyone with a brain knows there will always be instnaces in which those things have no idea what is going on or what they are doing.
We have already seen it with the cars.
Who goes to jail for negligence when one of them kills someone in a way we would say is negligent for a human driver? Sure it may even happen less, but we will even further reduce peoples lives just down to a settlement for the family with no real justice for them and nobody to hold accountable.
youtube · AI Jobs · 2025-05-28T21:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxsfDvexVtXaK5EcP94AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwm-ly_OfRoScR48tR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwGsE3mvkCwloOCP2t4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyH8692WMt4Z2DsyI54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyA_41VUu1z6-v_3Kp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwtNJJzhEbS46V5g2R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzgND1wQ0SnkiJegwp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwnf6EbOffm_wad71V4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw2Q2l6jvi-sj_a69t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwopO3LMGVSMwkHu9p4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
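A raw response in this shape can be validated before its rows are stored as coding results. The sketch below is a minimal, hypothetical example: the allowed values per dimension are inferred only from the labels visible on this page, and the real codebook may define additional categories.

```python
import json

# Allowed values per coding dimension, inferred from the samples shown on this
# page (assumption: the actual codebook may include labels not seen here).
SCHEMA = {
    "responsibility": {"government", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"outrage", "fear", "resignation", "approval", "indifference"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only rows with a plausible
    comment/reply ID and in-schema values for every dimension."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # IDs on this page start with ytc_ (comments) or ytr_ (replies).
        if not row.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid
```

Rows that fail validation (e.g. a hallucinated category label) are dropped rather than coerced, so malformed model output never reaches the coded dataset.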