Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
This was very poorly moderated, and I think it was obviously engineered in favor of the pro-moratorium camp.
1. Both of the con experts were selected from Facebook, who has a very poor public perception in their historical use of AI—The Social Dilemma.
2. Tegmark was given the last word in every segment. Everyone trusts the MIT professor, right?
3. The moderator was outright arguing for the pro team.
4. They never touched on the what-if’s around who exactly chooses whom should be given the right to develop AI. Is it not convenient for the Mega corps developing closed AI systems to enjoy regulatory capture?
5. There is one more issue I simply cannot highlight without risking having cancel culture unleashed on me, but the fact you know what I am referring to is proof positive the tactic worked.
Platform: youtube · Topic: AI Governance · Posted: 2023-09-24T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
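The coded dimensions form a small controlled vocabulary. A minimal sketch of validating one coded row before accepting it into the dataset — the allowed value sets here are inferred from the responses visible on this page, not an official schema, and the function name is illustrative:

```python
# Hypothetical validator for one coded row. The category sets below are an
# assumption inferred from observed model output, not a documented schema.
ALLOWED = {
    "responsibility": {"developer", "company", "government",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"fear", "outrage", "resignation", "approval",
                "indifference", "mixed"},
}

def validate_row(row: dict) -> list[str]:
    """Return a list of problems; an empty list means the row passes."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = row.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

row = {"responsibility": "company", "reasoning": "deontological",
       "policy": "regulate", "emotion": "outrage"}
print(validate_row(row))  # -> []
```

Rows that fail validation can then be routed back for re-coding rather than silently polluting the coded dataset.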
Raw LLM Response
```json
[
  {"id":"ytc_UgwVDGcwfE1NYW9XH-Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwh8RCr3PCRf9gZv2N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwF-19EE07RA1K7ctJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzWDaIPsZWzKTE5m2x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzsbOyMmAMrNEnLT-h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz7DCi196hY0sr-9sJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwTWARgml9yte_J2n54AaABAg","responsibility":"government","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxXOLGG4GygC5P_cG94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwNo-P3LDetGzQSm2N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz0p9CUmHlcvEPWeyB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```