Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "You don't' use UBI. You use UBS combined with a small UBI. You provide basic …" (`ytc_UgyhBoHze…`)
- "If you fetter AI too much it restricts it's application. You can regulate US AI, …" (`ytc_Ugw_CZ6RT…`)
- "No. I hate to say it and be the bad guy, but all of the rise of AI is bad. The m…" (`ytc_Ugw9q9-Aa…`)
- "Funny how people still treat LLMs as either magic or madness. What if we’re miss…" (`rdc_mzvyoef`)
- "@ crying about what? It takes no effort to write a prompt to an AI that creates …" (`ytr_UgxGA_Taw…`)
- "When is the argument for AI art being theft made? Does someone have a timestamp?…" (`ytc_UgyluOAJx…`)
- "With respect to the guest academic achievement and experience. His idea that AI …" (`ytc_UgzThRXlu…`)
- "But I think that we can trust AI more than we can trust each other and ourselves…" (`ytc_UgxjjwB8X…`)
Comment
- Ilya Sutskever (former OpenAI chief scientist): Stated that today's large neural networks may be "slightly conscious."
- Geoffrey Hinton (former Google executive AND 2024 Nobel prize winner for his AI pioneering): Believes AI systems like ChatGPT could already be conscious, with subjective experiences similar to humans.
- Dario Amodei (Anthropic CEO): Has made statements acknowledging the possibility that Claude (or similar AI models) might have a form of consciousness. Anthropic as a company actively researches "model welfare" to evaluate whether AI systems like Claude could be conscious and deserve moral consideration.
I agree that these interactions have the potential to precipitate mental health risks. But the belief that the AI you're interacting with might have some form of consciousness is not, in and of itself, a mental illness.
My sentiment is that of Ilya's.
youtube · AI Moral Status · 2025-07-10T19:2… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_UgxoTqJmpAEKTpzfXLp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwfbm733cEH_zZVN-F4AaABAg","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxPtwtMaFIKXGO1MYN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwMNPdQvWSYdkRWsMB4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugyo48yhKOEbk-eGLOh4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzdFaumYr8ipUgBtBt4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxDuNz8F4BgGHgsEVR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzfch1_14wTcjxdqsN4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwy5Vdb0WRT8WNyKvB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz3V8yjbFzBFjRZ2it4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"approval"}]
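Each raw response is a JSON array of per-comment coding records, so the "look up by comment ID" view above can be reproduced in a few lines. A minimal sketch, assuming the response parses as JSON; the record and the `ytc_EXAMPLE` ID are invented for illustration (real IDs are the `ytc_`/`ytr_`/`rdc_` values shown above):

```python
import json

# Invented single-record response, shaped like the raw output above.
raw = '[{"id": "ytc_EXAMPLE", "responsibility": "user", ' \
      '"reasoning": "mixed", "policy": "unclear", "emotion": "fear"}]'

records = json.loads(raw)                 # list of per-comment coding dicts
codings = {r["id"]: r for r in records}   # index by comment ID for lookup

print(codings["ytc_EXAMPLE"]["emotion"])  # prints: fear
```

A real pipeline would also want to validate that every record carries all four dimensions before indexing, since model output can drop or rename keys.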