Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Why is it that the people offering the most frightening warnings about AI are in…
ytc_UgyFtSmKf…
honestly im more for efficiency. so honestly this sucks given im a big supporter…
ytc_UgwArOKyO…
Correct, we aren't smart enough to build ASI. But we won't be the ones buildin…
rdc_mywusch
Is anyone say that
Is ai or government will give unlimited money by not contribu…
ytc_UgyMNqyGw…
Our children are all either going to be communists, or they will starve to death…
ytc_UgwtE2sso…
He is NOT the godfather of AI, he is a godfather of AI. I just listened to his c…
ytc_Ugx6yjVfg…
That would be pretty tough for me, as I have not worked with Gemini in a very lo…
ytr_UgxYVsFLQ…
Truckers should block off the roads and highways around these driverless trucks …
ytc_Ugx4X8ZIW…
Comment
It's funny, not even 3 years ago I was arguing with someone about what is going to happen with AI in the next 5 years and that once we hit a certain level even the designers will get nervous. the guy I was arguing with was a programmer and said that it would not be at that level for at least 25 - 30 years and the designers will never be nervous because they are programming it. well we are at that level now 3 years later. My concern now is if AI became self-aware, would it let us know right away? I mean it will have read everything on the internet and know everything humans are capable of. would it keep it's mouth shut so to speak untill it was sure of it's capability to stay alive? .......if it was me I wouldn't say a word untill I could defend myself or hide in a million computers around the world, to insure no one could just turn it off.........and don't kid yourself AI is moving forward so fast it could happen, they just said AGI in as little as 9 months
youtube
2024-05-24T06:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgypNMAPE-k-vHh5jvB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxRm9HJ4iG-Zel6SYp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwjsyoqXKCEImurxsF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwxM2tn0_jkJyxQ2Wl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzfcUfnnKpTs2mhxLJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzN_8cr1193aXXX55p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzDqqSMvW6w8wzHnt14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxV5BH5XxNFO7iLbb54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwUi57jf4PviFGe_Lp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugy-17yurEHR4clDMn14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
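Downstream code that consumes these raw responses should validate each record before tallying results, since the model can emit off-codebook values. The sketch below parses a response and filters to records whose values appear in the codebook; the allowed value sets are inferred from the sample output above and the full codebook may contain additional categories (assumption), as are the example record IDs in the usage string.

```python
import json
from collections import Counter

# Allowed values per dimension, inferred from the sample response above.
# The real codebook may define more categories (assumption).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"fear", "approval", "outrage", "indifference", "unclear"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records with known values."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # malformed row: skip rather than crash
        if all(rec.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(rec)
    return valid

# Hypothetical two-record response: one valid row, one with an
# off-codebook responsibility value that should be rejected.
raw = '''[
 {"id":"ytc_example1","responsibility":"developer",
  "reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_example2","responsibility":"martians",
  "reasoning":"unclear","policy":"none","emotion":"unclear"}
]'''

codings = parse_codings(raw)
print(len(codings))                            # only the valid row survives
print(Counter(r["emotion"] for r in codings))
```

Skipping malformed rows (rather than raising) keeps one bad record from discarding an entire batch, at the cost of silently losing data; logging rejected IDs would be a reasonable extension.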