Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
While it's true that the data centers are emitting a ton of CO2...what are the chances that a MUCH smarter AI than us will solve that problem? High. At it's core it's a chemistry problem which would, baring new discoveries, requires energy to fix. Energy we don't have or rather don't want to pay for. Short term...sure it will make it worse. But think about status quo: Fewer new ways to get new energy sources to fix it nor as many new ways to reduce that energy cost. It's like arguing that taking a loan doesn't make sense financially. But it DOES if you make more profit from it than the interest. It's a bet for sure...but it's likely a good one. It's just hard to factor in things that MIGHT happen when you don't know about them. AKA there is less risk with AI than without for that issue.
Source: youtube · AI Governance · 2026-04-06T11:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwZyB8kadifiA5VOKV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyuXF-ZWdfxIGTZpqB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxaIa4fEwEzdI2s9zd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx43kDpw1e7XXqgyMt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzj9PJuBEeifooeqFt4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyrdosCanqs7tT0YON4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugy1gpYcngcXLWCMFP94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwxtRtaQzrEzEbkwAh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzBCZUWleatLTwB9HN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwtpYsUBVrfpiZnz8N4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
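The raw response above is a JSON array of per-comment records, one object per comment ID, with the four coding dimensions shown in the result table. A minimal sketch of parsing and validating such a response — note the `ALLOWED` sets are inferred only from the values visible on this page, not from the project's actual codebook, and `parse_coding_response` is a hypothetical helper name:

```python
import json

# Allowed values per dimension, inferred from the sample records above.
# Assumption: the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "company", "government"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "mixed"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"approval", "fear", "indifference", "outrage", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of records) into a
    mapping from comment ID to its coded dimensions, rejecting records
    with unknown dimensions or out-of-codebook values."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        dims = {k: v for k, v in rec.items() if k != "id"}
        for dim, value in dims.items():
            if dim not in ALLOWED:
                raise ValueError(f"{cid}: unknown dimension {dim!r}")
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: {value!r} not allowed for {dim}")
        coded[cid] = dims
    return coded
```

Validating at parse time surfaces any hallucinated category the model emits before it reaches the results table.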