Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "Ai may be advancing but its still going to be a bot for a good while…" (ytc_UgztA6Vrg…)
- "It seems like you might be focusing on Sophia's design rather than her conversat…" (ytr_UgzJw8Rz1…)
- "These AI generated videos are a problem and designed for anybody to fall for it.…" (ytc_UgwlcUCGN…)
- "People not using ai is going to be over taken by those who do use it. So you ca…" (ytr_UgzVWQLhT…)
- "When i first heard of ai (and before I learned about the whole unethical backgro…" (ytc_UgyiXUplF…)
- "After Uber Self-Driving Fatality -- What Now you ask? We push even more for sel…" (ytc_UgwEF4Byk…)
- "I think brown people need to build AI, since it’s just gonna be a sort of middle…" (ytc_UgwIdDrI2…)
- "Lying is meant to decieve. Chatgpt did not mean to decieve nor was Alex decieved…" (ytc_Ugwmfl7Eu…)
Comment (youtube, 2025-12-06T05:4…):

> Sounds like a subset of dispatching data will need to be included into autonomous vehicles’ programming. The car only knows to perform based on the data it receives. There’s a gap that exists and it should be filled in. Our infrastructure will need to modernize a whole lot more in order to bring true autonomy onto the main stage. We’re still clinging to the old infrastructure. We need a new, New Deal!
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyCcRAE8hn27iOvolJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgzY16yhJK6avrYAKgZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgydbOTXPVDW_v2GqLx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxKuECTyiYW7Fj5d114AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxcdesi_HF0_E3ijBt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyC3DdgoFX1aK9jcbh4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwfA_DicIodeXNkHTJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugzi42NClpE3fzNu_Wx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxVjRG2nFI-JDVzY614AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxfqLCZ0itpUkzpnr14AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
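A raw response like the one above can be turned into coded records with a few lines of parsing and validation. The sketch below is illustrative, not part of the tool: the allowed values per dimension are inferred only from the examples shown on this page (the actual codebook may include more), and `parse_coding_response` is a hypothetical helper name.

```python
import json

# Allowed values per coding dimension, inferred from the samples above.
# ASSUMPTION: the real codebook may define additional values.
SCHEMA = {
    "responsibility": {"distributed", "company", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"indifference", "outrage", "fear", "approval"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of objects) into a
    mapping of comment ID -> coded dimensions, rejecting unknown values."""
    coded = {}
    for rec in json.loads(raw):
        dims = {}
        for dim, allowed in SCHEMA.items():
            value = rec[dim]
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={value!r}")
            dims[dim] = value
        coded[rec["id"]] = dims
    return coded

raw = ('[{"id":"ytc_UgyCcRAE8hn27iOvolJ4AaABAg",'
       '"responsibility":"distributed","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"indifference"}]')
result = parse_coding_response(raw)
print(result["ytc_UgyCcRAE8hn27iOvolJ4AaABAg"]["policy"])  # regulate
```

Validating against a fixed schema at parse time catches the common failure mode where the model invents an off-codebook label, so bad codes fail loudly instead of silently entering the dataset.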