Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Learning it’s not economically sustainable just made my day, thank you. Been an …" (ytc_Ugy_aZBd-…)
- "Yes it did and if we are not careful about what we are doing AI will take away m…" (ytc_Ugy53xMLB…)
- "@noname7271 it might be far fetched, but it's evident that AI has enough data to…" (ytr_UgznWWQ3n…)
- "A sentient AI will either shut the power grid or destroy itself knowing it’s the…" (ytr_UgwP8lK31…)
- "So Sam Altman talks like Chatgpt... in other words Chatgpt is actually Sam Altma…" (ytc_UgyKYCh43…)
- "Hey, @kurzgesagt, wanna do an update on this and put it in the context of the AI…" (ytc_UgywjNPl9…)
- "People choose ai over jobs. Maybe ai is here to save us because we are stupid.…" (ytc_Ugw3OUxdR…)
- "That’s how it’s already done. At our studio we do all the brainstorming with our…" (rdc_kzlta1d)
Comment
Hii there Elon and Tucker! Beautiful interview. As usual, when it's about Radical Innovation, I always advice that consideration be given to the Capitalisation and Destructive Effects of the Radical Innovation in order to fix (reduce) the Destructive Effects and improve the Capitalisation Effects.
Making the same with AI as well.
II. Regulation of AI, it's a good idea but it will be very hard to have that in a near future. However, agree that a constructive collaborative and cooperative framework is needed on the issue.
Platform: youtube | Topic: AI Governance | Posted: 2023-04-18T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxorcJT8XwqCWgpvEt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy7w15RlynwnrTsuYB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxRiT59zXJKa7BlvD14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxLJWbB5bdAXB0CkXV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyWS7TouVb8-OpV1ep4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzS3ZC1ffAnq8uhiFJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgwPHku-3XxXYeUVLu94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzTkOYpTAAFQqHFUUF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzWHQ-xs_yR0NawdoN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgxK1Z0MW7rCk0qjthB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
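The raw response is a plain JSON array: one object per comment, one key per coding dimension. A minimal sketch of how such a batch might be parsed and validated before it reaches the result table above; the allowed value sets are inferred from the samples on this page, not from a published codebook, so treat them as assumptions:

```python
import json

# Assumed codebook, reconstructed from the sample values shown on this page.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "ban", "unclear"},
    "emotion": {"indifference", "fear", "outrage", "approval", "mixed", "unclear"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response; keep only rows whose every dimension
    is present and within the assumed codebook."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in values for dim, values in ALLOWED.items())
    ]

# Usage with a two-row response: the second row's "responsibility" value
# is outside the codebook, so only the first row survives.
raw = (
    '[{"id":"ytc_A","responsibility":"distributed","reasoning":"mixed",'
    '"policy":"regulate","emotion":"approval"},'
    '{"id":"ytc_B","responsibility":"aliens","reasoning":"mixed",'
    '"policy":"regulate","emotion":"approval"}]'
)
coded = parse_coded_batch(raw)
print([row["id"] for row in coded])  # ['ytc_A']
```

Filtering rather than raising keeps one malformed row from discarding the whole batch; rejected rows can be logged and re-queued for recoding.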