Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgwFGauZG…: "AI likes to tell me I'm genius, and likes everything Idea no matter how weird, u…"
- ytc_UgxlBk5cM…: "Indian government having no idea on AI and IT will implement a fine of 200 rupee…"
- ytc_UgwUuYH8v…: "Its not conscious. That AI is just a program roleplaying as a human. Its basic a…"
- ytr_Ugw7HYfkW…: "Same. Definitely need to make a video about their glacial progress on regulating…"
- ytc_UgwcUNChQ…: "plot twist: we are in a narrow-AI-generated simulation that tests if superintell…"
- ytc_UgwbJYonW…: "I was talking to my boss today and we came to the conclusion that Ai won't ever …"
- ytc_UgzAM_M9w…: "Chatgpt did not lie. I define a lie as knowingly giving someone a false notion o…"
- ytc_UgwzjIKe-…: "I did a thought experiment involving consciousness... Incredible... Its sentien…"
Comment
I love your episodes, but I think just showing the opinions of **philosophers** when it comes to AI was a bit of a miss. I happen to work on this field, and when people say "We don't understand how these systems work! They are evolving" that's a BS claim.
As an expert and a builder in this field, we 100% understand how the systems work. We know and understand the science behind it, we know how to tweak the models and the training sets. It's not dark magic. It's not out of control. Philosophers who have 0 technical background and "boots on the ground" experience like to spread fear
The fact of that matter is that AI is a tool, and research should not be stopped. That would be very dumb. But like you said, thankfully, the Pandora's box was opened and no one is going to close it.
Source: youtube · Topic: AI Governance · Posted: 2023-07-07T19:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
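One record of this shape is stored per coded comment. A minimal sketch of such a record, assuming the field names from the table above (the class name, the `"unclear"` defaults for unclassifiable comments, and the UTC timestamp are illustrative assumptions, not the tool's actual schema):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# One coded result per comment; field names follow the table above.
# Defaulting every dimension to "unclear" is an assumption: it marks
# comments the model could not (or did not) classify.
@dataclass
class CodingResult:
    comment_id: str
    responsibility: str = "unclear"
    reasoning: str = "unclear"
    policy: str = "unclear"
    emotion: str = "unclear"
    # ISO-8601 timestamp recorded when the coding is saved
    coded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

result = CodingResult(comment_id="ytc_UgwU8kcBHycmSXkE0OB4AaABAg")
print(result.responsibility)  # prints "unclear"
```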
Raw LLM Response
[{"id":"ytc_UgwU8kcBHycmSXkE0OB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzy8Cv6-hz8DOEb7014AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwQQonpNMAV-8Pfc-t4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxJhgwfaD5CDa-d82x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzGlzslGDAghcWfjFB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzgp0aGFakMYES1h7h4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw-Y2neFcxb0LSjlFF4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugweg_5Mzbcmav2CpkV4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwGGTmopjfVtI6bMR94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzBUqoX6YWTIl7t-w94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"})
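Raw responses like the one above are near-JSON but not always valid JSON (note the stray `)` where the closing `]` should be), and the code values need checking before analysis. A minimal sketch of a parser plus validator, with the per-dimension codebooks inferred from the values visible on this page (the real codebooks may contain more categories):

```python
import json

# Allowed codes per dimension, inferred from the responses shown above;
# this codebook is an assumption and may be incomplete.
CODEBOOK = {
    "responsibility": {"none", "company", "developer", "government",
                       "ai_itself", "distributed", "unclear"},
    "reasoning": {"unclear", "deontological", "consequentialist",
                  "contractualist", "virtue", "mixed"},
    "policy": {"none", "unclear", "regulate"},
    "emotion": {"approval", "outrage", "indifference", "mixed",
                "resignation", "fear", "unclear"},
}

def parse_raw_response(text: str) -> list[dict]:
    """Parse a raw LLM response into coded records, tolerating a
    stray closing parenthesis where the closing bracket belongs."""
    text = text.strip()
    if text.endswith(")"):  # near-JSON slip seen in raw output
        text = text[:-1] + "]"
    return json.loads(text)

def invalid_codes(record: dict) -> list[str]:
    """Return the dimensions whose value is not in the codebook."""
    return [dim for dim, allowed in CODEBOOK.items()
            if record.get(dim) not in allowed]

raw = ('[{"id":"ytc_example1","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"approval"})')
records = parse_raw_response(raw)
print([invalid_codes(r) for r in records])  # prints [[]]
```

Records with a non-empty `invalid_codes` list can then be routed back for recoding rather than silently coerced to `"unclear"`.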