Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- I think this is one perspective. But, I see two more widespread uses: - Anyone … (rdc_lzdaqug)
- They never sleep, eat, complain, get ill, tierd . No emotion, morals, compassion… (ytc_Ugw2sHfJ-…)
- Tesla clearly mentions that its supervised Autopilot and not self driving? Autop… (ytc_UgyUd_6TJ…)
- Dont know why i watched this..... i will never be able go affort this car😂😂😂 an… (ytc_Ugy-f007X…)
- In case an emergency occurs. (15m is too little I know so perhaps somewhat more.… (ytr_Ugh_9XnDJ…)
- If you think intellect is what makes a robot sentient then you really don’t unde… (ytc_Ugx5amtZU…)
- We do not even manage to ensure Human Rights, not speaking of Animal Rights. So,… (ytc_Ugx6ldpxb…)
- "Unacceptible privacy violations" have become acceptable to them. Expect a big c… (ytc_UgzErzSHc…)
Comment
I don't know if this interview is a joke. If he is just a lecturer at a minor university, it means he knows little about the IT world. In 2030, AI will still be too unreliable. AI without advanced robotics is useless in many fields. This is a video for the ignorant masses, of course—just to sell the video and the book. It was the same in the past: low-IQ people were scared by innovation. At some point, there will be integration between machines and biological bodies. The endgame will come when we realize that the best possible AI, combined with the most advanced robotics, is something like a human being without rapid biological decay. All the rest is BS.
youtube · AI Governance · 2025-09-04T21:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyFTo1JqTg2E--EnOZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwDhW34R2EyRrUiklV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgySNx968Gql0nM8M294AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgyYZlZD4CrgSkeh_x94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwG7-fNCRMH-x2BNEt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz9kn2yYkaz1jHL_KV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxa2LhPibljFHASB7l4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzLJEgKvpLMVqNaskN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgyA_lCoFNOe4HjoLYx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwRwuZDJkSNilv1rJx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
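The raw response is a JSON array with one object per comment, keyed by comment ID, which is what makes the by-ID lookup possible. A minimal sketch of that lookup (the variable names are hypothetical; this is not the tool's actual code):

```python
import json

# Assumed workflow: the raw LLM batch response is a JSON array of
# per-comment codes. Parse it once and index by comment ID.
raw_response = """
[
  {"id": "ytc_UgyFTo1JqTg2E--EnOZ4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwDhW34R2EyRrUiklV4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
"""

# Build an ID -> coding-record index for O(1) lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coding result for a single comment by its ID.
record = codes_by_id["ytc_UgyFTo1JqTg2E--EnOZ4AaABAg"]
print(record["emotion"])  # indifference
```

With this index, the four coded dimensions (responsibility, reasoning, policy, emotion) for any sampled comment can be retrieved directly from its ID.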