Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
well... i can live with this...chatgpt has been a good therapist, pulled me out …
ytc_UgxtCNg9e…
This is just the ultimate justice, boomers and Gen Xers. You kids go too slow. I…
ytc_UgxnazXC_…
26 min this is the important point. Ai would see that in 0.0001 of a second whil…
ytc_UgxBhWSpw…
What the fuck are you on about "you're to scared to state your opinion" bro... i…
ytc_Ugwf_mYWz…
Think: If companies manage to get AI that replaces humans in each of the existi…
ytc_Ugx4txTp5…
Time for me to figure out which companies use these driverless trucks, and never…
ytc_UgywHOJEJ…
Reminds of the incident where the USAF ran 2 simulations with an AI-controlled d…
ytc_Ugx2Q73U-…
Hello, is it enough to watch a few hours of videos on YouTube for this level of …
ytr_UgwBO8e5h…
Comment
Cynthia nails it, listen to her. She had her head on straight and makes sure to keep a foot grounded in reality. Chris also seemed reasonable. Nate is a fool, disregard him entirely. Kate Crawford/Latanya Sweeney are meh, primarily focused on philosophy (*BORING* alert!). Eric Schmidt says everything I would expect him to say, most of which I agree with. Neil Degrasse Tyson has become mostly a buffoon/host/personality, which is fine, but be mindful of that when he slips in his own comments and ideas between asking the questions to the guests (Read: Disregard his comments and ideas).
I will leave a couple of general points/rules of thumb on the topic:
- *If people try to scare you and the best argument they can come up with is entirely contingent on something imaginary, it's probably not worth getting scared* (Doesn't this sound so obvious when stated aloud?).
- I notice a lot of AI safety advocates misuse the term "emergent behaviors," and the model is actually just doing exactly what it was created to do in most situations where they apply the term (this is exactly what happened when Nate used the term in this debate about Claude, and when Eric used the term when speaking about LLM's and "deep reasoning" being emergent). *Make sure you take a moment to interrogate each situation where you hear this phrase being applied.*
- Continuing from the last point and as Cindy hit on: *interpretability is a human limitation, not machine intent.* Try to not get sucked into hype even if an AI model does something you didn't expect it to do. That moment of surprise is when you are most vulnerable to misinterpreting what happened and adopting an extreme view when all the model did in reality was exactly what it was designed to do based on the input parameters.
- Job loss/displacement due to technology isn't inherently bad, so if you hear it framed that way it is an immediate credibility damager to whoever you heard it from. It doesn't mean it isn't painful, but it is not inherently bad. It is also usually grossly exaggerated (lol, Nate with the "I expect 100% unemployment"), when a more reasonable forecast would be jobs changing rather than being deleted en masse. Usually arguments like these out people as dishonest interlocutors more than they should sway you in one way or another in regards to AI.
youtube
AI Governance
2026-04-01T14:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugz-Gh575HbOzr5TCyV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy3351DUgmv987PLvl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx2T8cpHtuN0jzRNld4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyIL8Gdd4Uy6eozncl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzz4O64NSipWvcE30x4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzcUD7xtnibYbERoJl4AaABAg","responsibility":"company","reasoning":"mixed","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyTIfl5IxKBmrKsdvx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy2JH4lJ93V3N9SGzN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxUYpNKS3rya3Fk3AR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwVo_N7raO02SP46MZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
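The raw response above is a JSON array in which each record codes one comment along the four dimensions from the result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and sanity-checked is below; the allowed value sets are inferred only from the values visible in this dump, not from any official codebook, and `parse_codings` is a hypothetical helper name.

```python
import json
from collections import Counter

# Allowed values per dimension, inferred from the table and JSON shown in
# this dump (an assumption, not an official schema).
ALLOWED = {
    "responsibility": {"none", "distributed", "government", "company",
                       "ai_itself", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "ban", "unclear"},
    "emotion": {"indifference", "outrage", "fear", "mixed", "unclear"},
}


def parse_codings(raw: str) -> list[dict]:
    """Parse a raw JSON array of codings, keeping only records whose
    dimension values all fall inside the allowed sets."""
    records = json.loads(raw)
    return [rec for rec in records
            if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())]


# Two records excerpted verbatim from the raw response above.
raw = '''[
 {"id":"ytc_Ugz-Gh575HbOzr5TCyV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugzz4O64NSipWvcE30x4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

codings = parse_codings(raw)
emotions = Counter(rec["emotion"] for rec in codings)
print(len(codings), emotions["fear"])  # → 2 1
```

Dropping out-of-set records (rather than raising) is one reasonable design choice when coding at scale, since an LLM occasionally emits labels outside the scheme; those records can then be re-queued for recoding.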