Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Trump "trust me bros ai wont makevthe electricity go up a lot just like tariff h…" (ytc_UgwNCG_D5…)
- "I would suggest you should interview people you know something about the subject…" (ytc_Ugy7sC_4Z…)
- "I could tell when the video started that this comment section would be full of d…" (ytc_Ugx259TQs…)
- "I don't see AI replacing Doctors for the simple reason that government regulatio…" (ytc_UgwOIw2VZ…)
- "This is exactly why everyone should carry around a small sketch book and do just…" (ytc_UgzS0D7a1…)
- "Its a hellova driverless 80.000 lb Dirty Bomb delivery system ... Domestic terro…" (ytc_Ugx2GRydO…)
- "12:25 I actually got it to admit that it had done that exact thing, but I had t…" (ytc_Ugz0DGM3z…)
- "@gondoravalon7540what ramifications would affect how humans create by making AI…" (ytr_Ugz8bOCyY…)
Comment
I love how in the FAQs, the third thing is "What are hallucinations?" And it goes on to describe how Google's AI can be so confidently wrong. Then tells you how to Google without AI to really get the info you seek.
Source: reddit | Topic: AI Surveillance | Timestamp: 1739633223 (2025-02-15 UTC) | ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_mcunn8m","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_mcwvgjz","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_mcxecld","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"rdc_mrpxp74","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_mcxtqz3","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
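The raw response is a JSON array with one object per comment in the batch; the coding result shown above for this comment corresponds to the `rdc_mcwvgjz` entry. A minimal sketch of how the coded dimensions can be recovered from such a response (the `lookup_codes` helper is illustrative, not part of the tool itself):

```python
import json

# Raw LLM response as shown above: a JSON array of per-comment codes.
raw_response = '''
[
  {"id":"rdc_mcunn8m","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_mcwvgjz","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_mcxecld","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"rdc_mrpxp74","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_mcxtqz3","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
'''

def lookup_codes(response_text, comment_id):
    """Parse a batch response and return the code object for one comment ID."""
    for entry in json.loads(response_text):
        if entry["id"] == comment_id:
            return entry
    return None  # ID not present in this batch

codes = lookup_codes(raw_response, "rdc_mcwvgjz")
print(codes["responsibility"], codes["emotion"])  # company approval
```

Because the model returns one object per input comment, a missing or mangled ID in the array signals a coding failure for that comment, which is worth checking before storing results.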