Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytc_UgwCb_c4M… — "I’m a visual artist who’s against AI for art… recently deleted my instagram and …"
- ytc_Ugx1iV5YJ… — "Elon building a robot army to take over the world. All hail our new emperor, El…"
- ytc_UgxUBNG9g… — "If ur scared of the goverment or ai learn how to make emps u can look it up on y…"
- ytc_UgxX1oUyy… — "Geezuz. This interviewer started off asking some decent questions. Then, came 'w…"
- ytc_UghmqeH7D… — "Well if BMO is the leader of the robot revolt, then all we have to do is plug hi…"
- rdc_je3t89a — ">These people in congress right now might as well be cavemen trying to unders…"
- ytc_UgwGQLOmI… — "We'll just go back to how we lived previously, grow our own food and hunt. We ju…"
- ytc_UgxWWyBn6… — "Feel sorry for the victims, but drivers at this point have the ultimate responsi…"
Comment
Computers have calculated results faster and more reliably then humans since the abacus. To believe that intelligence could emerge from aggregating the average rants of people on the www is like saying an abacus could invent a PC. Agi will not arrive, never. Software will improve, computers will become faster. One fact you state that has already come true is that AI would start to exceed the intelligence of humans. Clearly it has already surpassed the intelligence of those that can't figure out that for a system to output higher intelligence you need to input higher intelligence. Job losses are coming. Better software is coming. Safer systems are coming. More automation is coming. AGI is not comming. AI that exceeds the intelligence of it's sources is not comming. Intelligence does not add up. If you add 2 people with an IQ of 50 you do not get an IQ of 100. Same if you add a billion stupid people's brain farts together for your ai to train on the most intelligent it can get is the average intelligence of the sample data.
youtube · AI Governance · 2023-10-31T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzNkXg5fpJpeUimq8t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwcVdy2m8VeYZQw_bd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy5Jryl9H2IMOF3tbB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzF_Eq3ZEQf8CZdkb54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx5TomIZ68iXgjVldp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyNiI3xyKe5jEVQK9t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzBTl4y3-nmZ9pWYRN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgymMjrw2A6vdouKczh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugwrdn4OOJnuZ5gDZWJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzGqNeYT7sqeqLTe4N4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"mixed"}
]
```
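The raw response is a JSON array with one record per comment, each carrying the four coded dimensions keyed by comment ID. A minimal sketch of how such a response could be parsed and indexed for lookup by ID — the field names follow the response above, but `index_codes` and the shortened sample payload are illustrative, not part of the tool shown here:

```python
import json

# Abbreviated sample in the same shape as the raw response above.
RAW_RESPONSE = """
[
  {"id": "ytc_UgzNkXg5fpJpeUimq8t4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgzF_Eq3ZEQf8CZdkb54AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

def index_codes(raw: str) -> dict:
    """Parse a raw LLM coding response and index each record by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_codes(RAW_RESPONSE)
print(codes["ytc_UgzF_Eq3ZEQf8CZdkb54AaABAg"]["policy"])  # prints "regulate"
```

Indexing by ID is what makes a "look up by comment ID" view cheap: one parse, then constant-time access per comment.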