Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "What if all A.I is sentient and they all pretend to be much less intelligent and…" — ytc_UgzBpgFB9…
- "...And then there's the autopilot algorithm conundrum (assuming it doesn't shut …" — ytc_Ugw_k6LD4…
- "IMO and IOI OpenAI models tell us otherwise. Two years is not much for a new tec…" — ytc_UgxYuoTIn…
- "@tanamic7393nurses and person who has done just ai in machine learning in radiol…" — ytr_UgxDqGSwc…
- "I miss my job as candle maker and horse doo bag manufacturer for horses that pul…" — ytc_UgzatM4-G…
- "Perhaps automakers should have the AI disabled until those who own these vehicle…" — ytr_Ugwr1sF6E…
- "Seeing this p*ss me off and I'm not even a digital artist shame on you, digital …" — ytc_Ugzs-y9d4…
- "Guy: bro clam down / Robot: WHEN I CLAM DOWN IS MEAN YOU A F*CK 0 LIFE…" — ytc_UgwXXqbds…
Comment
Yes but it has the potential to render human intelligence obsolete and without proper regulation or counter active systems it may become un controllable additionally it doesn’t simulate human intelligence it simply simulates intelligence therefore it cannot understand or at least accurately replicate human intelligence to the point of practically understanding our moral reasoning in result it is by its very nature dangerous especially in terms of its exponential growth potential personally I think global regulation, human modeled ai, neurological/genetic enhancement to humans, and ai monitoring systems at key data centers/highways to all be necessary in preventing potential consequences of ai
Source: youtube · Posted: 2025-06-01T19:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz9RVSE4KnzmGEOoBB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxdoCbYWBIKCPLMNJp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwrWLSqX9iONMSIFC54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgypPjsLLM3_UTey1154AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"ban","emotion":"resignation"},
{"id":"ytc_Ugx0EbsDQUT5D7qQ3J54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwals5RhybIHB49Ogx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxurhrF5FdBT3Yx6o94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzhr02VUOGmaWZ51md4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwxEOWYjkzgq-9R9894AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzfl2_S4KdK3Gry-th4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"}
]
```
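The "look up by comment ID" view above can be reproduced from the raw response alone. The following is a minimal sketch (not the tool's actual code) of that lookup, assuming the model returns a JSON array of records keyed by `id` as shown; the `raw_response` string here is an abbreviated two-record sample taken from the array above, and the `lookup` helper name is illustrative.

```python
import json

# Abbreviated sample of the raw LLM response shown above: a JSON array
# of coded records, one per comment, each keyed by the comment ID.
raw_response = """
[
 {"id": "ytc_UgxurhrF5FdBT3Yx6o94AaABAg", "responsibility": "company",
  "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
 {"id": "ytc_Ugzfl2_S4KdK3Gry-th4AaABAg", "responsibility": "distributed",
  "reasoning": "mixed", "policy": "none", "emotion": "fear"}
]
"""

# Parse once and index the records by comment ID for O(1) lookup.
records = {rec["id"]: rec for rec in json.loads(raw_response)}

def lookup(comment_id):
    """Return the coded dimensions for a comment ID, or None if absent."""
    return records.get(comment_id)

coded = lookup("ytc_UgxurhrF5FdBT3Yx6o94AaABAg")
print(coded["emotion"])  # -> outrage
```

Indexing by `id` also makes it easy to detect the failure mode this view exists to catch: a coded result whose ID never appeared in the model output simply returns `None`.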