Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_Ugzdir0m0…`: Positive note: Life after Ai must be pleasant with endless technology and probab…
- `ytc_UgzbwUuyT…`: I think its funny how easy it is to find people defending generative AI online d…
- `ytc_UgxxJiPTY…`: with facial recognition and drones and cameras everywhere, it will be harder for…
- `ytc_Ugy5clkSu…`: I just don't like that Google shows ai images (bad art and quality) when you do…
- `ytc_UgxUzrDS_…`: AI is plundering small creators by taking their content without consent and not …
- `ytc_UgxCZ69Dl…`: So he said hes been working with ai for 30 years and 30 years ago is when termin…
- `ytc_Ugxzjf2X2…`: haha what a joke this dude and what is that scary uniform of him. AI is just a d…
- `rdc_jdl80cu`: I understand your point when reading it. But I think an interesting research stu…
Comment
> Meanwhile Google used its "Gemini AI" to smear Conservative politicians and content creators by fabricating lies about them; implying they are p3dophiles or engaged in other criminal activities when there was nothing of the sort. They were also caught blackwashing historic figures (even the founding fathers of the US) and completely eliminating straight white men with any imagery shown depicting minorities (which was rather hilarious when we suddenly were faced with black people in Nazi uniforms).
>
> So it seems Google doesn't care one bit about AGI ethics and safety; only on how AI may benefit Google's political agenda and power. Only after they got caught redhanded did they claim to do better but I suspect they will only try to hide it better.
>
> edit: If the world will one day be destroyed by AI; we can be almost certain that either Google or Microsoft will be the cause of humanity going extinct.
youtube · AI Governance · 2024-03-02T05:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
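The four coding dimensions in the table above can be sketched as a small validation step. The category sets below contain only the values observed in this sample batch (the full codebook may define more), and all type and function names are illustrative:

```python
from typing import TypedDict

# Category labels observed in this sample batch; the underlying
# codebook (not shown here) may define additional values.
RESPONSIBILITY = {"none", "user", "developer", "company", "ai_itself"}
REASONING = {"consequentialist", "deontological", "mixed", "unclear"}
POLICY = {"none", "liability"}
EMOTION = {"resignation", "indifference", "outrage", "fear", "approval"}


class CodedComment(TypedDict):
    """One coded comment, as emitted in the model's JSON array."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str


def validate(item: CodedComment) -> bool:
    """Return True if every dimension holds a known category label."""
    return (
        item["responsibility"] in RESPONSIBILITY
        and item["reasoning"] in REASONING
        and item["policy"] in POLICY
        and item["emotion"] in EMOTION
    )
```

A validation pass like this catches labels the model invents outside the codebook before they reach the coded dataset.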
Raw LLM Response
```json
[
  {"id":"ytc_UgxD2aHzdcmX_pvWSbZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw8iXRUXDWSfZPnaHJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx72F4PC8z_4lnFLAx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxbyR7GQIOS3bKGCqx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz24bCTlC-ruqvX_N14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxKEUedxDI4mG5lBOx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyVZzU9a5nINBW7lqp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwQZKf1v6ckoe4oldJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxtUn4q_-kyLSgm2GZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyYKAsGUZwbEZnECbF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
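The raw response is a JSON array in which each object carries a comment ID, so the lookup-by-ID view can be backed by a simple index. A minimal sketch, with variable names as assumptions and `raw_response` abridged to one record from the array above:

```python
import json

# Abridged to a single record from the batch shown above.
raw_response = """[
  {"id": "ytc_UgwQZKf1v6ckoe4oldJ4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]"""

# Parse the model's JSON array and index it by comment ID,
# so any coded comment can be looked up directly.
coded = {item["id"]: item for item in json.loads(raw_response)}

record = coded["ytc_UgwQZKf1v6ckoe4oldJ4AaABAg"]
print(record["emotion"])  # outrage
```

This matches the coding result shown above for the selected comment: the record keyed by its ID carries the company / deontological / liability / outrage labels.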