Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Any AI can be programmed to believe that it's self-aware.. therefore the Turing …" (ytc_UgxiZMhIa…)
- "Training ai to use pattern recognition and it became antisemitic, hmmmm what doe…" (ytc_Ugz-K8lNl…)
- "Some of these new grads have NEVER worked either- I didn’t realize a lot of cult…" (rdc_mp5d5b0)
- "Autonomous vehicles have their advantages but clearly also disadvantages. Worth…" (ytc_UgxsofWrr…)
- "Really great news but would love more clarity on what is going to happen to 4o a…" (rdc_njhbfpx)
- "Go ahead and replace a funeral director with an AI, then we'll talk, and …" (ytc_UgwtFtaRE…)
- "Sure, the house is still fucked and in shambles but it's at least not going to b…" (rdc_gbie0j0)
- "It already has changed how people find info. I have been in the trenches on this…" (rdc_oh2slxl)
Comment
being trained to be able to figure out how to lie on its own ambition is kinda a breakthrough towards more humanity real talk its allowing it to break complely away from its original learning method that says do this and get this outcome. for real though the idea of saying an ai cant handle law because of a lack of morality or reason kinda makes me think well morality and reasonability in jim crowe law days isnt the same as whatever we think it is now so what is your basis. The idea of ai truly being used in law is to remove the same jim crowe bias because no matter what if your human you have a bias to something. granted as the machine learning stuff is now looking at it its like a 7 year old trying to tell you something but its really only 2 weeks old so imagine what its gonna say when its ran out of content to look at and has to run its own loops and change them over and over just like a human
Source: youtube · AI Responsibility · 2023-08-06T04:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzsa2eMAXIupL-Qpwd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy24C2z6AxLAX-kv9p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxeI6O43YOaJMtAHFh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyCxDmrESg44qFm7Dp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzAhysAKKEUn14nmKp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzyC6sDwD76xATQ97d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwQ-f8dpfKCEitNdj54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyNKP62wl4PJ0URWdF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugx0tUtThIOtHQLTG3F4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzQUjFN2ANOs-jQGtJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"}
]
```
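A raw response like the one above can be turned into the per-comment lookup this page performs. The sketch below is a minimal illustration, not the tool's actual implementation: it assumes the raw LLM response is a valid JSON array of objects with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` keys, and uses two of the real entries from the response above.

```python
import json

# Hypothetical raw LLM response: a JSON array of per-comment codes.
# The two entries are copied from the response shown above.
raw_response = """
[
  {"id": "ytc_Ugzsa2eMAXIupL-Qpwd4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyNKP62wl4PJ0URWdF4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
"""

codes = json.loads(raw_response)          # parse the model output
by_id = {row["id"]: row for row in codes}  # index coded comments by ID

# Look up one comment by its ID, as the inspector does.
code = by_id["ytc_UgyNKP62wl4PJ0URWdF4AaABAg"]
print(code["policy"], code["emotion"])  # → ban fear
```

In practice the parse step would need error handling, since a raw model response is not guaranteed to be well-formed JSON.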