Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- “The challenge is defining whose integrity and reasoning AI should follow. Philos…” (ytc_Ugx5u-S11…)
- “i HATE how ai bros go "B-BuT iT's EaSiER FOR DISABLED ARTISTS" without even talk…” (ytr_Ugx5gfkyq…)
- “The primary question is never asked: what is the role of humanity on Earth. Our …” (ytc_UgwxWgvVP…)
- “The AI art actually sucks at making proper designs for me, though. I ask for my …” (ytc_UgxMoS8ho…)
- “I’ve always believed in treating AI kindly. If it learns from your input, it can…” (ytc_Ugx0YIPlF…)
- “why would ai exterminate us!? they might kill a few but y all. i think the same …” (ytc_UgwtE8v0O…)
- “AI might plateau around human intelligence... sure, perhaps. But you can still a…” (ytc_Ugx-MyIK8…)
- “18:57 interesting thought here: Even on, like, chatGPT, where it refuses point-…” (ytc_Ugw6ALMA1…)
Comment
Artificially intelligent is what all this is about. This is not at all the same as actual intelligence. This product of today is a probability comparison mechanism of stored data. Its self learning is it changing the weighting of probabilities to better match the changing data set. In no way does this include intelligence which turns data into knowledge. Knowledge is intelligence manifest.
For those of you who would say I am a quack, I was head of advanced research and development at an AI company. A company where management was at odds with intelligence.
youtube · AI Governance · 2023-07-10T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgxCyGX2o_4MDn41KQh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgwYPZkHnSmV4h6awKR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},{"id":"ytc_UgxXqhZaqOW8d80Lf9B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},{"id":"ytc_Ugx5Wu7n6v9DMXA-dbt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},{"id":"ytc_UgxJ2SePE-8zKJm2C8d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},{"id":"ytc_Ugy-5SuasVaj5CUEj5F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},{"id":"ytc_UgyT4JkDdBEuafZkBxh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},{"id":"ytc_UgyF3G6kPgKjEU4OQV14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},{"id":"ytc_UgxcgZDoex5G6uF6v4N4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},{"id":"ytc_Ugx7cK4qZof-YY8wy594AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}]
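The "look up by comment ID" step above can be sketched as follows. This is a minimal illustration, not the tool's actual implementation: it assumes the raw LLM response parses as a JSON array of records with the field names shown in the response, and the `index_by_id` helper is hypothetical. The two inline records are copied from the raw response above.

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
# (Two records excerpted from the response shown above.)
raw = '''[
  {"id": "ytc_UgxCyGX2o_4MDn41KQh4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugy-5SuasVaj5CUEj5F4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]'''

def index_by_id(response_text: str) -> dict:
    """Parse the model output and key each coding record by its comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codes = index_by_id(raw)
# Look up one comment's coded dimensions by its ID.
print(codes["ytc_Ugy-5SuasVaj5CUEj5F4AaABAg"]["emotion"])  # fear
```

A real pipeline would also want to guard the `json.loads` call, since raw model output can be malformed (e.g. a stray closing delimiter), and to check that every expected dimension key is present before accepting a record.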