Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:
- "If AI fails, it will be economically devastating. If AI succeeds, it will be ev…" (ytc_Ugw-oDd9I…)
- "Are you telling me all those years of Shakespeare didn't teach me how to invest,…" (ytc_UgwbtpRzR…)
- "Facts. And even with all the fear around AI replacing jobs, there are ways peopl…" (ytr_UgyWLD8i2…)
- "You're not wrong. It's one of many large problems that is being kicked down the …" (rdc_n5gk7wr)
- "So chat GPT essentially filtered information .. .wow, ground breaking. So when …" (ytc_UgwQjTZXD…)
- "I don’t believe AI will replace classroom teachers. AI can’t make a connection a…" (ytc_UgyDJWfbJ…)
- "A.I: Cost trillions A.I: It's not increasing productivity or making money, real …" (ytc_Ugw2zaZui…)
- "Wow! I have an IPhone and facial recognition doesn’t work for me. I didn’t under…" (ytr_UgwJSUpIy…)
Comment (youtube · AI Governance · 2025-10-06T08:1… · ♥ 1)

> The best way to minimise the negative impact on society is to tax AI. Distribute the wealth to everyone, reduce the wealth gap. It would decentivise to some degree the development of AI and optimise the development of AI to improve human flourishing and minimise human hardship.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgyQp5G5HbRhrz4owJJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy9Y7zIhTJWG2fX3zx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzGkrjCBgkTEipjKlN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwc7JOeLUrxjvlgbjp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz0CpnH5PUDauvTjYZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxyrGlHMpeg5UC_7Sl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwqHZ3sVpsgG8AjwF14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwZQ6DjQ7msGRnllTZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzFm3GvfTB3mqs5oM54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgwRPwD4E9QV_O9mMdZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
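The model returns one JSON array per batch, with each element keyed by the comment's platform ID. A minimal sketch of how such a response can be parsed and indexed for per-comment lookup (the helper name `index_by_comment_id` is hypothetical; the field names match the JSON above, and the response is abbreviated to two entries):

```python
import json

# Abbreviated raw model output: two entries from the array shown above.
raw_response = '''
[
{"id":"ytc_UgyQp5G5HbRhrz4owJJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwqHZ3sVpsgG8AjwF14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
'''

def index_by_comment_id(response_text: str) -> dict:
    """Parse the model's JSON array and key each coding by its comment ID."""
    codings = json.loads(response_text)
    # Drop the "id" field from each record so the value holds only the dimensions.
    return {item["id"]: {k: v for k, v in item.items() if k != "id"}
            for item in codings}

lookup = index_by_comment_id(raw_response)
print(lookup["ytc_UgwqHZ3sVpsgG8AjwF14AaABAg"]["policy"])  # → regulate
```

Keying by comment ID is what lets the interface above jump from any coded comment straight to the exact model output that produced its dimensions.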