Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples — click to inspect

- `ytc_UgxZtjTbY…` — "They will do this again at first sign of further automation. You mean nothing to…"
- `ytr_UgyBZI0-c…` — "@Beachdawg1996.....ALL forms of EVIL, from all over the world, has been used to…"
- `ytc_UgxVRp0ss…` — "I don't think police should use facial recognition software unless the crime is …"
- `ytc_Ugz2BQ5EB…` — "Don't think so.... I'm sorry for the confusion, but as an AI developed by OpenAI…"
- `ytr_UgxKkiCdp…` — "@DaVioletShark Human brain just automatically corrects the word if the first and…"
- `ytc_UgyC9YWsX…` — "Disabled artist here 🩷 My art is the thing that kept me housed and fed during t…"
- `ytc_Ugz_C95k2…` — "sad news for real workers....... but id be ok if AI replaced BP pundit…"
- `ytc_UgypN0aBT…` — "Okay yeah, AI is bad definitely. Especially for the creatives and everyone who w…"
Comment
I'm not sure about motives, other than the obvious (technically sweet, lucrative, exciting, empowering), but the AI accelerationists are a consistently dishonest group. Pro-regulation is not a self-interested position as far as I can tell, giving that position a higher a priori presumption of honesty. At the more granular level, argument by argument, position by position, the regulators have a much better case. There are an unlimited number of ways that introducing powerful alien intelligences into our civilization will be dangerous, nor can we confidently ascribe some limit to just how dangerous they may be, nor do we know the threshold at which various levels of threat will arise. The philosophers invented a term to describe in part our epistemological position with respect to emergent AI: anosognosia, the inability to know what we do not know, or, in common parlance, the unknown unknowns.
youtube · AI Governance · 2023-06-27T13:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
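The "look up by comment ID" feature above can be sketched as a simple index over coded records. This is a minimal illustration, assuming records shaped like the coding table and raw response shown on this page; the IDs and helper here are placeholders, not real comment IDs or the tool's actual implementation.

```python
# Hypothetical coded records matching the dimensions shown in the table above
# (responsibility, reasoning, policy, emotion). IDs are placeholders.
coded = [
    {"id": "ytc_example1", "responsibility": "company",
     "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
    {"id": "ytc_example2", "responsibility": "none",
     "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
]

# Build a dict keyed by comment ID for O(1) lookup.
by_id = {rec["id"]: rec for rec in coded}

print(by_id["ytc_example1"]["policy"])  # → regulate
```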
Raw LLM Response
```json
[
  {"id":"ytr_UgwUti3nKWArqPeZ-Ut4AaABAg.9rSWSm0Wp_o9rU7yy3LPgi","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytr_Ugx-fWVIjvGigcWWvcx4AaABAg.9rSQLjpvTdp9rTMs4P91UI","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugx-fWVIjvGigcWWvcx4AaABAg.9rSQLjpvTdp9rVp-Q853sI","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytr_Ugwzbk-4P9eZqRv4nad4AaABAg.9rRUHxiVrrD9rUHoe6rc-j","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytr_Ugz-xaGPm3D8c0ixwBJ4AaABAg.9rRKEDkOEV39rTEdR_qqHb","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_Ugz-xaGPm3D8c0ixwBJ4AaABAg.9rRKEDkOEV39rTmYJdHCFt","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgxIXzNQGwU6g--gsSB4AaABAg.9rRAa9OypQh9rVO2L4QnOz","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytr_Ugz-SmocC08gAzk5kgp4AaABAg.9rR0f6HCIII9rTuvcLaWpp","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgzA8QT364rRklCbe8h4AaABAg.9rQzm8IReHZ9rSkNCQncq3","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_UgzA8QT364rRklCbe8h4AaABAg.9rQzm8IReHZ9rTCTQ3Th9H","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
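Since the raw LLM response is a JSON array of per-comment codes, a downstream pipeline would typically parse and validate it before storing results. Below is a minimal sketch; the allowed-value sets are assumptions inferred only from the samples visible on this page, and the real codebook may define more categories.

```python
import json

# Coding dimensions and allowed values, inferred from the samples shown
# here (assumption — the actual codebook may be larger).
ALLOWED = {
    "responsibility": {"company", "government", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"regulate", "none"},
    "emotion": {"outrage", "fear", "resignation", "indifference"},
}

def validate_batch(raw: str) -> list:
    """Parse a raw LLM response and check every record against the codebook."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={value!r}")
    return records

# Placeholder response (not a real comment ID).
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]')
coded = validate_batch(raw)
print(coded[0]["policy"])  # → regulate
```

A record with an unknown value (say, `"emotion": "joy"`) would raise `ValueError`, which is the point: malformed or out-of-codebook LLM output fails loudly instead of silently entering the dataset.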