Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "San Francisco with all the smart devices in everyone's home come on now ai just …" — ytc_UgxQjBBi1…
- "Vax reduced transmission rates, but yeah check your diagnosis before going. Lots…" — ytr_Ugy_CoMi1…
- "imagine teaming with chat GPT to make humans better and making us cyborgs. flesh…" — ytc_UgxV24L1A…
- "@ImNotPregnatImMaleFerretYou are just another talented artist dismissing the str…" — ytr_UgwRc2s96…
- "@elemenar232 Yes, AI will keep automating employees daily tasks, eventually eit…" — ytr_Ugw64DYvD…
- "I'm not kind with the ai because I want it to do something for me bit because I …" — ytc_UgzZ4GMGM…
- "Ugh. This is one of the use cases my company wants ai for. I keep trying to get …" — rdc_n9iqzms
- "I constantly test and work with the very latest best of the best paid AI models.…" — ytc_UgyMPfH5t…
Comment
Ai is never gonna get dangerous or anything. We don’t give it a body and it doesn’t have a real brain, it can’t even think it can only predict sequences based on knowledge. Ai can have all the knowledge in the world, but that doesn’t make it smart. ChatGPT has no idea what he’s saying, if you tell them they’re wrong and give them a wrong answer and tell them that’s the right answer, ChatGPT just blindly agrees.
youtube · AI Responsibility · 2023-10-18T01:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
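The four coded dimensions in the table can be captured as a small record type. A minimal sketch — the field names come from the table and the raw JSON below; the value lists in the comments are only the values observed in this batch, not necessarily the full codebook:

```python
from typing import TypedDict

class CodingResult(TypedDict):
    """One coded comment; keys mirror the raw LLM response fields."""
    id: str
    responsibility: str  # observed: none, company, developer, ai_itself, distributed
    reasoning: str       # observed: deontological, consequentialist, mixed, unclear
    policy: str          # observed: none, regulate, ban, unclear
    emotion: str         # observed: approval, fear, outrage, indifference, resignation, mixed

# The row for the comment shown above.
row: CodingResult = {
    "id": "ytc_UgxuxJA1WUhcQMBNI814AaABAg",
    "responsibility": "ai_itself",
    "reasoning": "consequentialist",
    "policy": "none",
    "emotion": "fear",
}
print(row["reasoning"])  # consequentialist
```

Typed keys make it harder to silently drop a dimension when aggregating coded batches downstream.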
Raw LLM Response
[{"id":"ytc_UgxuxJA1WUhcQMBNI814AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyPFBmrsPr8HLO_KOR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzoaNik1hEB2rTTLxJ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwuiOyAgCCOihGraAZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwARruJihzVBbcadFt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy6cEUJlaek2cW_yj54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxN0f1j-Q2kpt6Ekdd4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxgwfmv47G-V0U1eNp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzl4jtQEtQinbF9LHN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy3iKtoQX0hnh9MQPp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}]
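The "look up by comment ID" view above amounts to indexing the raw response by its `id` field. A minimal sketch, assuming the response is the JSON array format shown (the `index_by_id` helper name is illustrative, and the two rows here are copied from the response above):

```python
import json

# Two rows copied verbatim from the raw LLM response above.
raw = """[
  {"id":"ytc_UgxuxJA1WUhcQMBNI814AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzl4jtQEtQinbF9LHN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

def index_by_id(response_text: str) -> dict[str, dict]:
    """Parse a raw coding response and key each row by its comment ID."""
    return {row["id"]: row for row in json.loads(response_text)}

codes = index_by_id(raw)
print(codes["ytc_Ugzl4jtQEtQinbF9LHN4AaABAg"]["policy"])  # regulate
```

If a model ever emits a duplicate ID, the dict comprehension keeps only the last occurrence, so a production loader would want to check `len(codes) == len(json.loads(raw))` first.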