Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
All this "Super intelligent AI" stuff is a load of crap designed to make sure that people who don't like AI are still repeating it's talking points.
What we have right now is not an AGI. It never will be. All you are doing with videos to the contrary is letting the very real criminals like Elon Musk and Sam Altman off the hook for the damage their little delusion machine is causing right now.
Platform: youtube
Video: AI Moral Status
Posted: 2025-12-15T08:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
{"id":"ytc_UgyJ6KhkiiT28INviGV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz17sDuD87KAsYNveJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxmiKaNZPHQGGA0Cs14AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzyYIw9CHSy04O0_Lx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugyxios-b4xFg0iUgvF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy1TfHbxGrQOvOZz7x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyBzCJQJSmViQiFH814AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy6Qd-0aWc6Yi5-UjZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzYxXnmw-z6ZLAMMOR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwRNFvuoNk1P8DSDrJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
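As a minimal sketch (assuming the raw response is valid JSON of the shape shown above; the variable names here are illustrative, not part of the tool), the batch output can be parsed and indexed by comment ID so that any coded comment's dimensions can be looked up directly:

```python
import json

# Two records copied from the raw response above; a real response
# would contain the full batch.
raw_response = """
[
 {"id":"ytc_Ugz17sDuD87KAsYNveJ4AaABAg","responsibility":"company",
  "reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgzyYIw9CHSy04O0_Lx4AaABAg","responsibility":"ai_itself",
  "reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
"""

records = json.loads(raw_response)

# Index by comment ID for O(1) lookup of any coding.
by_id = {record["id"]: record for record in records}

coding = by_id["ytc_Ugz17sDuD87KAsYNveJ4AaABAg"]
print(coding["policy"])   # liability
print(coding["emotion"])  # outrage
```

Because the model returns one JSON object per comment keyed by `id`, indexing this way also makes it easy to detect missing or duplicate IDs before accepting a batch.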