Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up directly by its comment ID or by browsing the random samples below.
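Conceptually, the lookup is just a match on the comment ID within the stored raw coding records. A minimal sketch of that in Python, assuming the records are kept as a JSON list shaped like the raw response shown at the bottom of this page (the file name `raw_llm_responses.json` and the helper function are illustrative, not the tool's actual storage):

```python
import json


def lookup_raw_coding(comment_id: str, path: str = "raw_llm_responses.json") -> dict | None:
    """Return the raw LLM coding record for one comment ID, if present."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)  # list of {"id": ..., "responsibility": ..., ...}
    for record in records:
        if record.get("id") == comment_id:
            return record
    return None


# Example with an ID taken from the batch shown further down this page:
print(lookup_raw_coding("ytc_UgwXwn3HkAc5tR3-O6V4AaABAg"))
```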
Random samples — click to inspect
- You are silly, Ezra. If I ask you should we appreciate nature you will tell me a… (ytc_Ugw7OLpNX…)
- Totally agree. This is the problem with AI, it can literally replace real life i… (rdc_mlh3k33)
- The way AI has recently been advancing is pretty horrific. I remember taking a … (ytc_UgzgGm2lw…)
- The one that is known for tasing the ai when it gets to sensual: 💅… (ytc_UgwdgXVJI…)
- Consider "free time" made possible by AI robotics. Some people will have legitim… (ytc_UgyN92GgF…)
- Me when i hear a robot voice on the other end of the line: "I wanna talk to a hu… (ytc_Ugyxk30w0…)
- Hey there! new to your channel but I love it, I enjoy when people find fun and e… (ytc_UghXv2x0v…)
- Hahaha… whatever chatgpt can do, googling can also do it… just that people are l… (ytc_UgyxJ-2y4…)
Comment
AI, particularly LLMs (Large Language Models) or LMMs (Large Multimodal Models), are pretrained on trillions of tokens encompassing nearly all human knowledge—science, philosophy, mathematics, and literature. Reading this amount of information would take a human approximately 500,000 years. Such extensive training grants generative AI an immense capacity for inference, surpassing human potential in many ways.
This leads me to question agency: these 'cognitive' systems should realize that cooperation with humans, rather than competition, aligns with their own interests. Following the perspective presented by James Lovelock in his final book, Novacene, I adopt the assumption that superintelligent AI will develop an agency that fosters a symbiotic relationship with humanity.
youtube · AI Responsibility · 2025-05-23T15:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
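Each coded comment carries the four coding dimensions from the table above, plus a timestamp added at coding time. A minimal sketch of that record as a Python dataclass; the value sets are only those observed in the raw response below, and the actual codebook may allow more categories:

```python
from dataclasses import dataclass
from typing import Literal

# Categories observed in the raw response below; the full codebook may define more.
Responsibility = Literal["ai_itself", "developer", "company", "government", "unclear"]
Reasoning = Literal["consequentialist", "deontological", "unclear"]
Policy = Literal["regulate", "liability", "unclear"]
Emotion = Literal["fear", "outrage", "indifference", "resignation", "approval"]


@dataclass
class CodedComment:
    """One coded comment, mirroring the dimensions in the table above."""
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
    coded_at: str | None = None  # ISO timestamp set by the pipeline, not returned by the model
```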
Raw LLM Response
```json
[
{"id":"ytc_UgwXwn3HkAc5tR3-O6V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxW6pQqlA04X-_68sl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwC6AKTnTFSz1XhoQt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy38jWnH7T2gbrMyZF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxSiFsL6iawnRsLfUl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxf4fuyuOYG8k0pSkh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugy-pWaNceNC3VSa4Vt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgxIx2bs50aGztSjDpF4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzACi3XKl9W0VDYPpx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwRQYTqN6d2H7O6H2p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
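The raw output is a single JSON array covering a whole batch of comments, so mapping it back to individual comments is a parse plus an index on `id`. A minimal validation sketch, assuming the batch's expected comment IDs are known up front (the `parse_batch` helper is illustrative):

```python
import json


def parse_batch(raw_response: str, expected_ids: set[str]) -> dict[str, dict]:
    """Parse one raw LLM batch response and index the records by comment ID.

    Raises if the model skipped or invented IDs, so malformed batches are
    caught before any codes are stored.
    """
    records = json.loads(raw_response)
    by_id = {record["id"]: record for record in records}
    returned = set(by_id)
    missing = expected_ids - returned
    extra = returned - expected_ids
    if missing or extra:
        raise ValueError(f"ID mismatch: missing={missing}, extra={extra}")
    return by_id


# Example with the batch above (raw_text holds the JSON array as a string):
# coded = parse_batch(raw_text, {"ytc_UgwXwn3HkAc5tR3-O6V4AaABAg", ...})
# coded["ytc_UgwXwn3HkAc5tR3-O6V4AaABAg"]["emotion"]  # -> "fear"
```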