Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):
- "The AI saying that humans are inferior are out of context, I got many answers it…" (`ytc_UgzFx88iz…`)
- "I used to support AI but it's getting insane AI actually needs to be banned for …" (`ytc_UgyWWgwqP…`)
- "Milton Friedman visited ex Soviet countries shortly after the USSRs collapse to …" (`ytc_UgwiZRRWJ…`)
- "Thank you for your comment! Sophia definitely provides some thought-provoking in…" (`ytr_UgzO7tFAd…`)
- "using ai for inspiration is fine but just using the ai generated image is not ar…" (`ytc_UgxfvVJW5…`)
- "It was a DNC account. But it was a deep fake. Not because of the eyes but becau…" (`ytc_UgwjCnSp4…`)
- "I was hoping he would actually be the AI in some sort of crazy plot twist…" (`ytc_UgxDtICSj…`)
- "> and now with shitty friends. India has been friends with the US for a whil…" (`rdc_lua6tk6`)
Comment

> The biggest thing about AI safety is knowing how the AI works. If this is going to be a public good that's basically monopolized, we should know what and how OpenAI filters out what it perceives as "Biased Content", and try and figure out how to actually make GPT beneficial.

youtube · AI Governance · 2023-05-17T02:2… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytr_UgzVfiIcMF1l_Je0u6d4AaABAg.9pn0Yb_I47C9ppacM3mqNx","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgxHCOVLfCDQFcNmSel4AaABAg.9pn-XVA1Hbg9pn9vFagUKZ","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytr_UgxHCOVLfCDQFcNmSel4AaABAg.9pn-XVA1Hbg9qgzFZv5FeR","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytr_UgwoySkl4MRsQkmBjSh4AaABAg.9pmxWp_fJv79pn3Z5njUd4","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwoySkl4MRsQkmBjSh4AaABAg.9pmxWp_fJv79pnQEnO1f6a","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwoySkl4MRsQkmBjSh4AaABAg.9pmxWp_fJv79pyp3aHSeFa","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytr_UgyfaLTHbc73yxfTcJB4AaABAg.9pmo6znuYqo9pnT6cY66hz","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytr_UgyfaLTHbc73yxfTcJB4AaABAg.9pmo6znuYqo9pokP2m-LWX","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytr_Ugw24WZzQxwp8FGivEp4AaABAg.9pmku29CVQv9pmmYjVOUJl","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugw24WZzQxwp8FGivEp4AaABAg.9pmku29CVQv9pmpoj3eEWQ","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
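A raw response like the one above can be turned into the per-comment lookup the inspector provides. The sketch below is a minimal example, not the tool's actual implementation: it parses the batch JSON, checks each record against the dimension values seen in this section (an assumption — the full codebook may allow more values), and indexes the codings by comment ID.

```python
import json

# Allowed values per coding dimension, inferred only from the examples
# in this section (assumption: the real codebook may define more).
SCHEMA = {
    "responsibility": {"none", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"indifference", "approval", "outrage", "fear", "resignation"},
}

def index_codings(raw_response: str) -> dict:
    """Parse a raw batch response and index coding dicts by comment ID."""
    records = json.loads(raw_response)
    codings = {}
    for rec in records:
        comment_id = rec.pop("id")
        for dim, value in rec.items():
            if value not in SCHEMA.get(dim, set()):
                raise ValueError(f"unexpected {dim}={value!r} for {comment_id}")
        codings[comment_id] = rec
    return codings

# Hypothetical comment ID, for illustration only.
raw = ('[{"id":"ytr_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"approval"}]')
codings = index_codings(raw)
print(codings["ytr_example"]["policy"])  # regulate
```

Validating against a fixed schema catches the common failure mode of batch coding: the model inventing a label outside the codebook, which would otherwise silently skew downstream counts.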