Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I mean, if ai develops to have consciousness maybe what they should do, is give …
ytc_Ugy-CBDth…
8:50 So you first said that it's horrible when AI copies things too well, but no…
ytc_UgxUt8aQV…
After this interview it's clear now why Google fired him. It had nothing to do w…
ytr_UgwJQjJ3U…
Considering that AI images aren't art, it can't replace artists. It just regurgi…
ytc_UgyqSxX7K…
Seems like a nonissue. Why would I care if ai trains using what I tell it…
ytc_Ugy7dhEVD…
He’s missing a huge point when he says that tech isn’t always deployed as was hi…
ytc_UgxfAk71V…
> One would require the U.S. government to be transparent when using AI to in…
rdc_jnmqwai
ive been really waiting for your video on AI art for a while, I had a feeling yo…
ytc_Ugxrt4ocw…
Comment
True AGI will, I suspect, is an emergent quality. Currently, things like ChatGPT are very clever parrots who compile and rephrase existing information. However, we have already arrived at what is known when it comes to the "uncanny valley" when one interacts with ChatGPT. Now, we have these systems talking to each other. As this video outlines, we have already started this feedback loop. I do suspect that within 10 to 20 years, we will see the rise of true AGI. It will either be our greatest invention or our last. It will create itself similar to how single cells eventually came together to create complex self-aware lifeforms like ourselves. O
youtube
2025-08-03T01:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgzS41KzSsDHrRtiD8N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyjd5b6ms1KWYMNz294AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxtaeX1VExix_1f7hR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx69oKtVmwV2GKS8nl4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxdLZoM3CxOUR6LqPB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyTiUazd_rBumB203R4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwEV2sdc9DzFMdg05B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"indifference"},
{"id":"ytc_Ugx27DpTQJZZHqwz_pp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyKTrcbTS5QKz9zVX14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwg2Ve9JrgrKHL8Ccd4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
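A raw response like the one above can be checked before it is stored. The sketch below is a hypothetical validator, not part of this tool: it parses the JSON array and rejects records whose dimension values fall outside the vocabularies observed in this sample (the full codebook may define more values, and `ALLOWED` would then need extending).

```python
import json

# Allowed values per dimension, as observed in the samples on this page.
# Assumption: the real codebook may be larger; extend these sets as needed.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "unclear", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "unclear"},
    "emotion": {"indifference", "fear", "approval", "outrage", "mixed"},
}

# ID prefixes seen in the sample list (YouTube comments/replies, Reddit comments).
ID_PREFIXES = ("ytc_", "ytr_", "rdc_")

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with missing or
    out-of-vocabulary values."""
    records = json.loads(raw)
    for rec in records:
        if not rec.get("id", "").startswith(ID_PREFIXES):
            raise ValueError(f"unexpected id format: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

raw = ('[{"id":"ytc_UgzS41KzSsDHrRtiD8N4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
print(len(validate_coding(raw)))  # → 1
```

Validating at ingest time means an out-of-vocabulary value (e.g. a misspelled label from the model) surfaces immediately rather than appearing as "unclear" noise in downstream counts.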