Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a coding directly by comment ID.
Random samples:

- "That's why the AI, Running on top of the dataset should be dynamic enough to onl…" (ytc_Ugza3ylnC…)
- "As a Software dev. let me assure you that AI is going to take over art eventuall…" (ytc_UgyBWXiIh…)
- "Its Y2K all over again. I suspect there will be incredible things that will happ…" (ytc_UgzhyhMkm…)
- "I hate the fact that we don’t account for our consciousness and emotions when we…" (ytc_UgxUcf3FS…)
- "One robot gave side eye to the camera 😬 I refuse to believe that these are robot…" (ytc_UgyRaXU4L…)
- "So when we have a super intelligent AI, which is not going to be a tool and not …" (ytc_UgzZPbL8z…)
- "It seems you have a strong perspective on the meaning of names. In the video, So…" (ytr_UgzQS1RUe…)
- "a good way to tell if something is real or ai is look for consistent continuatio…" (ytc_UgyoYZ_pw…)
Comment
I don't remember where I heard this but I heard it 10 or 15 years ago and it's always stuck with me. What I heard was. By 2030 we would AI at the same level as the fictional AI in the terminator series with the ability to do the same thing. This prediction set aside. There are 2 things I REALLY do fear could actually happen or of all the apocalyptic type scenarios are the most likely to occur. 1. Is an asteroid wiping us out and helplessly watching the end. 2. AI becoming sentient/self aware, going rogue and escaping into the internet and destroying the world via any number of ways, launching all the nukes is the most obvious and popular way but there are several ways a rogue AI could cripple and destroy modern humanity.
youtube · AI Governance · 2024-03-05T21:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw2BvxQS1ptiA4gyxB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgybEW-LV-nV_q19XlF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwKapiQShtwB4OCnO14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz3zPKjxi4CZfPNPGV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzwuoc0i_E-vSPSUnZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugzog9kiFmFKYhbHd8J4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx_mnjlTqYA6ijB0794AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy77eLUm-lnHLwiqfR4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyMRZyklgM16HocSJ54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzMRKr7rlsDwla1Uqp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
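The lookup-by-comment-ID step can be sketched as follows: parse the raw LLM response as a JSON array and index each coding row by its `id` field. This is a minimal sketch, not the tool's actual implementation; the `raw_response` here uses shortened placeholder IDs, but the row shape matches the response above.

```python
import json

# A raw LLM response: a JSON array of per-comment codings.
# IDs are shortened placeholders; the field names match the response above.
raw_response = """
[
  {"id": "ytc_AAA", "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "fear"},
  {"id": "ytc_BBB", "responsibility": "company", "reasoning": "deontological",
   "policy": "regulate", "emotion": "outrage"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and index the coding rows by comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)
print(codings["ytc_BBB"]["policy"])   # → regulate
print(codings["ytc_AAA"]["emotion"])  # → fear
```

Indexing once into a dict makes repeated ID lookups O(1), which matters when the same batch response is inspected for many comments.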