Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Yeah except humans literally have COLLEGES and YEAR LONG COURSES for that shit b…" (ytr_Ugw9eM6Pz…)
- "All Big Law Firms of the world including India are either using softwares with b…" (ytc_Ugykfxamt…)
- "Its all jokes and games until the ai shit causes a mental health spiral and or c…" (ytc_UgwuUEB9M…)
- "What if AI goes fully open-source and wipes out the entire logic of big companie…" (ytc_UgwNE5foT…)
- "@bymegangrantI admit, my grammar isn’t the best, it’s my weakness. But …" (ytr_UgyaNZ-d3…)
- "AI is already in computers can you imagine how much information it can take anyt…" (ytc_UgwX3968z…)
- "Some of this is absolutely real and horrifying. Humans like to believe they are …" (ytc_UgwptNtn-…)
- "As far as consciousness goes, I feel we ask those questions because 1. We hardly…" (ytr_UgymFehxV…)
Comment
I think Ezras questions and the rabbit holes he pushed Eliezer to go down weren't optimal, a lot of time was spent discussing the ways it might not be bad (but did you consider this? or this other thing? for like 45mins).
The thesis here isn't complicated. Smarter things tend to control dumber things, and AI is on path to be smarter than all of us. It's only nerds who need to armchair debate this. Most people are afraid of what AI will become, this polling has been done. We should talk about how to get gov buy in and what regulations we need now, not make up the possible doomsday scenario we will find ourselves in. I accept a priori we will land in one, now lets stop it. How? Talk about that more
Platform: youtube · Topic: AI Governance · Posted: 2025-10-15T21:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy1egUSacGBMPQ4BKV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwetfDxOVuH1vuc9RB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwVP425hKUyxaCzdmh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgyI5DPCta6duLKzyr54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwPaYcIVSzYEPNHe6p4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugx0rZRZ9c7FMXTPdsd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzdQEWkOF26mJP8NEB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw_vKorzBDdq9QHXlJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzbaaIIZiBMusgDrkl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugy4LxW4IjIcZHmjdhR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
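Responses like the one above can be checked programmatically before the codes are stored. The sketch below is a minimal validator assuming the allowed values per dimension are exactly those that appear in the responses shown here; the actual codebook may define more categories, and the function name `validate_coding` is hypothetical.

```python
import json

# Allowed values per dimension, inferred only from the responses shown above;
# the real codebook may include additional categories (assumption).
CODEBOOK = {
    "responsibility": {"distributed", "ai_itself", "developer", "company"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"fear", "indifference", "resignation", "outrage"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs in this dataset start with ytc_ (comments) or ytr_ (replies).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        # Every dimension must be present and hold a known code.
        if all(rec.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(rec)
    return valid

sample = ('[{"id":"ytc_Ugy1egUSacGBMPQ4BKV4AaABAg","responsibility":"distributed",'
          '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(len(validate_coding(sample)))  # → 1
```

Dropping invalid records silently is one design choice; an alternative is to raise on the first unknown code so the model can be re-prompted for that batch.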