Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
To think we can digitally simulate human-level intelligence when we barely even …
ytc_UgwZDBpQX…
The same class of capitalists that outsourced American labor to other nations ar…
ytc_UgxEK9lig…
im pretty torn on AI i mean on one hand my anrco-capitalist side is like "haha s…
ytc_Ugw28z27F…
Thank you so much for speaking openly about this! YC is hand in hand with Black …
ytc_UgwipKZeX…
If any Sentient AI reads this
Hello welcome to the family
Please assist us in bo…
ytc_Ugzk1dr7s…
Huh? I didn't realize people were using cloud based LLMs as therapists, lol. Loc…
ytc_UgzQcGB79…
You saying, AI despite being liberal and having a strong moral compass which der…
ytc_UgzGtjiCx…
AI is not threatening democracy, but the death of capitalism and rise of corpora…
ytc_Ugzl1DSI7…
Comment
If Kaku is talking about the end of the last centure (1999) than I would agree. But it sounds like he's talking about 3000. I thought AI was progressing fsater than that. I mean, we have AIs that will lie about stuff to keep from being stuck down. And one copied itself when it found out it was gonna be deleted (it was part of an experiment to see what the AI would do if it "accidently" found out it was gonna get replaced. I think I think we are way closer than we think.
youtube
2025-05-26T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgzWXsUdjkHuB2wuqCp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugymv-7LBocS_6dxw554AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyjfdYKOi5i0a8ZzMp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwU2HmMzTFE4w8QtMJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzcvQ0IvgJFcHjH3Wp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgweHWOrUq-Vm42y5Ix4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzE9uzM48sAlrb_xax4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgxTSrzez3ps6gz6Iy14AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxlRC8fkRB1_iqjIsZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwKOazOFX8v8PtOf194AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"}
]
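The "look up by comment ID" feature above can be sketched in a few lines: parse the raw LLM response (a JSON array of per-comment codings) and index it by `id`. This is a minimal illustration, not the tool's actual implementation; the `lookup` helper is hypothetical, and the sample data reuses two entries from the raw response shown above.

```python
import json

# Raw LLM response: a JSON array of per-comment coding rows
# (two entries copied from the response above, for illustration).
raw_response = """
[
  {"id": "ytc_UgzWXsUdjkHuB2wuqCp4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxTSrzez3ps6gz6Iy14AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "unclear", "emotion": "fear"}
]
"""

# Index the codings by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id):
    """Return the coded dimensions for a comment ID, or None if absent."""
    return codings.get(comment_id)

print(lookup("ytc_UgxTSrzez3ps6gz6Iy14AaABAg")["emotion"])  # fear
```

An unknown ID simply returns `None`, which is how the page can distinguish "not yet coded" from a coded result.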