Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
There is at least one overlap of muscles and office work and one of those areas is Time-and-motion assessment.
We all know that it takes a finite amount of time to carry out a task, but some of us may not realise that how long it takes is of great importance to many companies.
A fencing firm needs an educated idea of how long it takes to install certain types of fences in order to quote for their customers.
An auto repair firm is even more precise: every task has a listed time, and how long it takes to change the nearside bulb in a C-reg classic Mini will differ from a modern Mazda, for instance.
But what if a computer was given control over employee performance?
Would it be sympathetic to times taking longer but for genuinely good reasons?
It's a small point in the grand scheme of things, I know, but it demonstrates that if AI were to take a bigger element of control, there would need to be a way to "argue" a point, or allowances made for justified changes, so that an AI judgement can be reviewed.
youtube · Cross-Cultural · 2026-03-02T15:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxegjOB7AnIgd6vZrh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw04sY0AGXpr2gL-1x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwCMuFHxnefT72G4EN4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw3wNjBstTrF2Z-WaZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwLdz6zJPrMl45H8sp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz7elvi1KW2JHFu71d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugzu7xeyjNuO5qKLerh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgygA8za1ZMnNiVOthJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyleabVHhJBd1hfjsx4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzrTqrPtUF4PeCovxl4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"resignation"}
]
```
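The raw response is a JSON array with one object of codes per comment ID. A minimal sketch of how such a batch might be parsed and validated before it populates a result table like the one above; note the allowed value sets are inferred from the codes visible in this dump, not from any published codebook, and `parse_batch` is a hypothetical helper name:

```python
import json

# Allowed values per dimension, inferred from the codes observed in this dump
# (assumption: the real codebook may define more categories).
ALLOWED = {
    "responsibility": {"none", "developer", "company", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"indifference", "outrage", "approval", "fear", "mixed", "resignation"},
}

def parse_batch(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM batch response into {comment_id: codes}.

    Rows with a missing ID or an out-of-vocabulary code are dropped,
    so a malformed model response cannot silently corrupt the results.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        codes = {dim: row.get(dim) for dim in ALLOWED}
        if cid and all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[cid] = codes
    return coded

# Example: one well-formed row is kept, keyed by its comment ID.
raw = ('[{"id":"ytc_x","responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"indifference"}]')
print(parse_batch(raw)["ytc_x"]["emotion"])  # indifference
```

Keying the result by comment ID is what makes the "look up by comment ID" view cheap: displaying a coding result is then a single dictionary access.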