Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- `ytc_Ugzd6wYVk…` — "Geoffrey, I appreciate you being the godfather of AI and all, but vc é soça pra …"
- `ytc_Ugy02Nzrm…` — "I mostly write and my drawing are not to the level I want tobillustrate my stori…"
- `ytc_UgwolBcTU…` — "Do whatever you may, but AI will go beyong human capability and will become self…"
- `ytc_UgzqwIBTe…` — "What a human Geoffrey Hinton is, you can see throughout the interview that his i…"
- `ytc_Ugx4UbJRv…` — "The thing about “A.I” is mostly use to scare people in different industries. Art…"
- `ytc_UgyXCJ5Ee…` — "Respectfully, you have no idea what you're talking about, it's not nearly as imp…"
- `ytc_UgwbOiA2f…` — "Robots don't pay taxes, don't contribute to the economy, and don't replace the r…"
- `ytc_UgwDg2K9c…` — "Oh great, can AI tech make it so we can completely block out known scammers that…"
Comment
I think he’s right that “the world is stranger than we expect” *would indeed* be a conclusion, in hindsight, that follows from our survival. BUT, in foresight, this is only reasoning that works *under the assumption of a world that has our interests in favor*. In other words, he’s not taking neutral grounds in the argument: “AI could be dangerous, or AI could not be dangerous”. He’s conversing under an assumed premise — that it’s not possible for it to be dangerous. If this can be acknowledged and communicated it could be helpful, or at least useful as an argument
I’ve come to learn that these things result from a virtually unsurpassable barrier of worldview. It would take some highly contrived deconstructions to establish mutual understanding and comprehension, of which would likely need to be individually tailored to said individual’s specific worldview. This is a very difficult problem which pervades many controversies in society, especially because it’s so underacknowledged.
Source: youtube
Posted: 2025-11-22T02:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytr_UgyLt72yzaSZcysuV6t4AaABAg.APwTAb5lMXOAPxGosJoPMJ","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytr_Ugzh-ft4yAvSqIhEN-14AaABAg.APnZ-RGttVuAPngNRbNVjI","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugz39BpT_9T5d-Ba3xx4AaABAg.APnLRBNerHsAPnUVkqTBOM","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_Ugyz6Ms8OUTnMzslEmN4AaABAg.APnJTRZi3UFAPoPA_rjhwm","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytr_Ugyz6Ms8OUTnMzslEmN4AaABAg.APnJTRZi3UFAPpasnLv2TR","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgwT7kNtEnbroo-TmBN4AaABAg.APnGSDtGtwKAPnUtLbFmHG","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytr_UgwT7kNtEnbroo-TmBN4AaABAg.APnGSDtGtwKAPnigSSaZIq","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytr_UgyrXQgmuXKHR-qkbLp4AaABAg.APn1wG7PpzpAPns05Io1Ze","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwZtlYjAycs5EqT-l94AaABAg.APmwguyhLamAPpBVTZYLL3","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgyLpLxuZqp_nffct6J4AaABAg.APmYrS9e-fYAPnVfM9IC4k","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
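A raw response like this needs to be parsed and sanity-checked before the dimension values are stored per comment. Below is a minimal sketch of such a validation step. The allowed value sets are assumptions inferred only from the samples visible on this page, not from the actual codebook, and `parse_coding_response` is a hypothetical helper name.

```python
import json

# Allowed values per dimension, inferred from the coded samples shown above.
# The real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "company", "government", "developer",
                       "ai_itself", "distributed"},
    "reasoning": {"mixed", "consequentialist", "deontological"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"mixed", "fear", "approval", "indifference", "outrage"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate each record.

    Raises ValueError on a malformed id or an out-of-vocabulary
    dimension value, so bad batches fail loudly instead of being stored.
    """
    records = json.loads(raw)
    for rec in records:
        if not rec.get("id", "").startswith("ytr_"):
            raise ValueError(f"unexpected id format: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records
```

Validating at parse time means a model that drifts from the codebook (e.g. inventing a new `emotion` label) surfaces as an error in the coding run rather than as silent noise in the coded dataset.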