Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- “We shouldn’t be doomed, because even from the trailers, the audience can sense a…” — `ytc_Ugxtzs-g-…`
- “What are we doing!?!?! Why are we racing towards AI?? All AI will do is cause pe…” — `ytc_Ugw3ibEna…`
- “There will never be another world war because there are atomic weapons. It's tha…” — `rdc_cfkvtn8`
- “AI is EVERYWHERE without our consent. It is automatically included in "upgraded"…” — `ytc_Ugyd14ylk…`
- “I was just sharing my opinion, not trying to have a big semantic debate. I just …” — `rdc_o8bnzma`
- “Personally, I used to focus on the Capitalisation and Destructive Effects of AI-…” — `ytc_UgxTginrQ…`
- “Let’s not forget about the absurd water and electricity usage AI requires. I muc…” — `ytc_Ugx2qmEZB…`
- “Real time projects are different,they are dynamic,robust and lot of thinking is …” — `ytc_UgzoAzsXl…`
Comment
I work on the computer side of this debate (I don't support training AI without full permission from all sources, to be clear!), and people have wayyyyyyy too much faith in AI. It's very impressive, but at its core, it's fancy math. I don't trust AI any further than I'd trust statistical analysis - it might make a good weather forecast, but I don't expect it to write any more than a semi-decent outline for a paper.
The good news is that AI is poisoning itself too. People keep posting tons and tons of AI-generated content, and that creates a feedback loop when companies scrape from where they output. Major companies are already running out of "useful" data, and even optimistic Silicon Valley tech bros are saying we might run out of quality data by 2026. So unless companies figure out some tricks fast, this boom is unsustainable.
youtube
Viral AI Reaction
2024-11-04T20:1…
♥ 72
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugw3GSp02it1W1pfs6R4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugw3Ty8sGVS9gH3Dg4N4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgynXdMD7kSuFpwbAGx4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx1lnuBPgzCj4bHaZZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgysO6VYLR0--iO1sSJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgypTEZbf82FF85CyBF4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwXU3VFUBQ7mAxQYgN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw4JFkgARfUqr73Krt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyURHwpBq3qo6WBusl4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwrBpg0ovLOOWYmZz14AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "mixed"}
]
```
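As a minimal sketch of how such a response can be consumed, the snippet below parses the raw JSON array and indexes the coded records by comment ID, dropping any record that lacks one of the five dimensions. The field names come from the response above; the function name and the skip-on-missing-field policy are illustrative assumptions, not the app's actual implementation.

```python
import json

# The five coding dimensions seen in each record of the raw LLM response.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coding(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of objects) and
    return a dict mapping comment ID -> coded record.

    Records missing any required field are silently skipped; an
    alternative design could raise instead, to surface malformed output.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        if REQUIRED_FIELDS.issubset(rec):
            coded[rec["id"]] = rec
    return coded

# Example with one well-formed record (taken from the response above).
raw = ('[{"id":"ytc_Ugw3GSp02it1W1pfs6R4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"outrage"}]')
coded = parse_coding(raw)
print(coded["ytc_Ugw3GSp02it1W1pfs6R4AaABAg"]["emotion"])  # → outrage
```

Indexing by ID makes the "Look up by comment ID" view above a single dict access, and the missing-field check guards against the model emitting partial objects.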