View Full Version : Running an immortal intellect by committee



NichG
2014-06-01, 08:54 AM
So it occurred to me that there's a correspondence between the mathematics that governs the persistence of genetic information across generations under mutation and the general requirements necessary to cause any sort of information to persist for long times. Basically, replication causes copies of information to be formed faster than they degrade, and there's a sharp transition between retaining genetic state and eventually losing it to mutation, called the Eigen error threshold.
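
To make the threshold concrete, here's a toy quasispecies simulation (a rough sketch for illustration only - the parameters are arbitrary): an all-zeros 'master' sequence replicates twice as fast as mutants, and it only persists while the per-site copying error rate stays below roughly ln(advantage)/length.

```python
import random

random.seed(0)

L_SEQ = 20    # genome length in bits
ADV = 2.0     # replication advantage of the all-zeros "master" sequence
POP = 300
GENS = 150

def run(mut_rate):
    """Return the surviving fraction of master sequences after GENS generations."""
    pop = [[0] * L_SEQ for _ in range(POP)]
    for _ in range(GENS):
        # fitness-proportional selection: the master sequence replicates ADV times faster
        weights = [ADV if not any(s) else 1.0 for s in pop]
        parents = random.choices(pop, weights=weights, k=POP)
        # replication with independent per-site copying errors
        pop = [[b ^ (random.random() < mut_rate) for b in p] for p in parents]
    return sum(not any(s) for s in pop) / POP

# the error threshold here sits at roughly ln(ADV)/L_SEQ ~ 0.035 errors per site
for u in (0.01, 0.03, 0.06, 0.10):
    print(u, run(u))
```

Below the threshold the master fraction settles at a finite value; above it, the information is lost to mutation no matter how long you wait.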

But that sort of principle can apply to things other than genetic information. Take something like oral histories - if each holder of the information tells a sufficient number of people, then the collective knowledge of that history will persist against random drift - essentially the errors average out. Of course, where it fails is in the presence of systematic error, e.g. someone telling many people copies of the information that are all incorrect in the same way. With things like written and digital records, a standard of equivalence is created to reduce or remove the effect of systematic error - the copier can look at the copy and verify that it is equivalent to the original; furthermore, if the copier is careless and doesn't double-check with 100% accuracy, then that error will still average out. That suggests to me that one could do the same process with other sources of information as well. The necessary components to pull it off for any kind of information are an information replication mechanism, and some sort of bias-control mechanism that makes the individual replicates statistically independent from each other.
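
The independent-vs-systematic error point is easy to check numerically (a minimal sketch with made-up numbers): averaging many independently-noisy copies of a value recovers it, but a shared bias survives any amount of averaging.

```python
import random

TRUE_VALUE = 10.0  # the "fact" being passed along

def estimate(n_copies, noise=1.0, bias=0.0):
    """Average n noisy retellings of a value; `bias` is an error shared by every copy."""
    copies = [TRUE_VALUE + bias + random.gauss(0, noise) for _ in range(n_copies)]
    return sum(copies) / n_copies

random.seed(0)
# independent errors shrink as the number of copies grows...
print(abs(estimate(10000) - TRUE_VALUE))
# ...but a shared systematic error survives any amount of averaging
print(abs(estimate(10000, bias=2.0) - TRUE_VALUE))
```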

So I'm curious if this could be done to create a persistent artificial intellect. Basically, take the sort of exercise anyone does when roleplaying - they try to emulate a particular character in their head, and choose actions consistent with that emulation. Now, if you have 50 people trying to play the same character, each rendition will be wildly different - in the genetic analogy, this means that the mutation rate is through the roof, so you need tons and tons of people to get past the error threshold. However, crowd-sourcing the mental computation could actually give you numbers like that. You won't preserve a perfect copy, since it's going to be hard to do the cross-checking to verify equivalence between different people's portrayals of the character, but you can have a sort of introspective process by having people separately submit acts/thoughts and vote to confirm them.

The idea would basically be to create a website, let's call it 'Bob', which contains certain demographic information about a fictitious person. The site displays a few sentences to each visitor which comprise 'what Bob was most recently thinking about' and 'what Bob most recently talked about/heard'. The site then displays a line of conversation from a specific person who wants to 'talk to' Bob, something that can be emailed or submitted to the site and is filtered automatically to try to remove most of the spam and bias selection towards people who Bob has talked to before. Each user submits two sentences - what Bob says, and what Bob thinks. Many users (50-100) are queried in order to provide a set of responses. Then, users may vote on the appropriateness of the other submitted responses by assigning them 'I like this' or 'I don't like this' sorts of ratings (this may need to be staggered somehow to prevent sources of sampling bias due to the expanding list of options). Users who have had a high rating in the past have more weight added to their votes. After enough responses and votes have been accumulated, Bob sends its response back to the person it was talking to. In order to get more interest, such conversations should probably be made public on the site as well, but Bob's 'thought process' (the hidden sentences) should be limited to respondents only.
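
The selection step might look something like this (purely hypothetical code - the function, user names, and weighting scheme are all invented for illustration):

```python
from collections import defaultdict

def pick_response(submissions, votes, reputation):
    """submissions: {user: (what_bob_says, what_bob_thinks)}
       votes: list of (voter, user_voted_for, +1 or -1)
       reputation: {voter: weight earned from past accepted suggestions}"""
    score = defaultdict(float)
    for voter, target, v in votes:
        score[target] += v * reputation.get(voter, 1.0)  # unknown voters get weight 1
    winner = max(submissions, key=lambda u: score[u])
    return submissions[winner]

subs = {"alice": ("Hi there!", "I wonder who this is."),
        "bo": ("Go away.", "I'm busy."),
        "carol": ("Hello again!", "I remember this person.")}
votes = [("dan", "alice", 1), ("erin", "carol", 1),
         ("erin", "bo", -1), ("dan", "carol", 1)]
rep = {"dan": 2.0, "erin": 1.5}
print(pick_response(subs, votes, rep))  # carol's submission wins with score 3.5
```

The reputation weights here are static numbers for the sake of the example; on the real site they'd be updated from each user's history of accepted submissions.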

Occasionally, to flesh out the Bob persona, there could be scheduled events that engage with forms of intellect other than conversation - 'Bob draws a picture', 'Bob composes a song', 'Bob plays Zork', etc. To really make it long-lived, the choice to do those activities would somehow have to be generated from the community behavior (anyone can propose such a project, and a sufficient number of votes on the site gives them automatic, temporary one-day access to submit a custom webpage enabling the Bob algorithm for that particular project). Additionally, some sort of classifier could be used that allows users of the site to pick certain threads of conversation where they're most comfortable playing Bob ('Bob talks about sports', 'Bob gives advice', etc).

Anyhow, it's kind of a crazy idea, but I'm curious how something like this would end up working. The real challenge would probably be to maintain interest - given the lifespan of internet fads, Bob would be lucky to make it to 3 years of age without some kind of gimmick to keep Bob relevant.

If nothing else, this seems like it'd make an interesting art exhibit.

TandemChelipeds
2014-06-01, 11:51 AM
Aren't you basically describing fanfiction?

The Grue
2014-06-01, 11:59 AM
...Users who have a high rating in the past have more weight added to their votes.

This is extremely abusable. On forums that use "rep" or "karma" systems where users with high rep/karma scores have an increased ability to give rep/karma to other users, this leads to a sort of "Circlejerk Effect". Giving users who receive high ratings more weight to rate others will, over a long enough period of time, result in the emergence of a "rating cartel", a comparatively small cross-section of the userbase with a disproportionately large ability to influence the rating system. This is an emergent feature of the community rather than any conscious decision to band together; by giving some users more weight, you introduce a selection bias towards the views and opinions of those users, who will tend to vote for options that reflect their own views and opinions.

To use a more transparent analogy, it's a "rich get richer" kind of setup.
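
A toy simulation makes the effect visible (arbitrary parameters, just a sketch): voters pick sides at random, the weighted majority wins, and everyone on the winning side gains weight.

```python
import random

random.seed(42)
N = 100
weight = [1.0] * N

for _ in range(1000):
    side = [random.random() < 0.5 for _ in range(N)]  # each voter picks a side at random
    yes = sum(w for w, s in zip(weight, side) if s)
    no = sum(w for w, s in zip(weight, side) if not s)
    winners = side if yes > no else [not s for s in side]
    for i, won in enumerate(winners):
        if won:
            weight[i] *= 1.05  # voters on the winning side gain influence

weight.sort(reverse=True)
top10_share = sum(weight[:10]) / sum(weight)
print(f"share of total vote weight held by the top 10% of users: {top10_share:.2f}")
```

Even though no one coordinates, the multiplicative reward concentrates voting power well beyond the proportional 10% share.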

NichG
2014-06-01, 01:21 PM
Aren't you basically describing fanfiction?

Well, fanfiction which is interactive and which uses a shared character, feedback, and other such mechanisms I guess? Essentially the idea is to use the brains of the userbase as emulators of a sort, where the 'program' is given by the shared canon generated by the feedback mechanisms implemented in an automatic fashion on the site.


This is extremely abusable. On forums that use "rep" or "karma" systems where users with high rep/karma scores have an increased ability to give rep/karma to other users, this leads to a sort of "Circlejerk Effect". Giving users who receive high ratings more weight to rate others will, over a long enough period of time, result in the emergence of a "rating cartel", a comparatively small cross-section of the userbase with a disproportionately large ability to influence the rating system. This is an emergent feature of the community rather than any conscious decision to band together; by giving some users more weight, you introduce a selection bias towards the views and opinions of those users, who will tend to vote for options that reflect their own views and opinions.

To use a more transparent analogy, it's a "rich get richer" kind of setup.

This is intentional, actually. The design is basically the same as a bistable switch. Bob doesn't have a personality at first, until essentially a fixed standard emerges and is locked in by the feedback mechanism. It's similar to the fixation of a particular genetic sequence in a population - as the subpopulation grows, it grows faster, and eventually dominates in a run-away fashion.

So essentially, without a mechanism like that the personality of 'Bob' would tend to lack persistence.

One could even do fancier designs than this. IBM Watson works by having a bunch of modules each of which has the ability to estimate its own accuracy for a given question or situation - the result is that the performance jumped from something like 60% to 90%. One could similarly do behind-the-scenes clustering on the sorts of interactions Bob has, and figure out on a finer-grained level not just which users tend to have their suggestions accepted, but even for a specific interaction it could estimate the probability that each user would have their suggestion accepted. So one user ends up having a good success rate on Bob's chit-chat interactions, while another user has a good success rate on Bob's philosophical discussions.
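
A sketch of what that finer-grained weighting might look like (hypothetical code - the class, topic labels, and smoothing scheme are all invented): track acceptance counts per user per interaction cluster, and weight votes by a smoothed per-topic acceptance rate.

```python
from collections import defaultdict

class TopicReputation:
    """Per-topic track record: a user's vote weight depends on the current topic."""
    def __init__(self):
        self.accepted = defaultdict(lambda: defaultdict(int))
        self.submitted = defaultdict(lambda: defaultdict(int))

    def record(self, user, topic, was_accepted):
        self.submitted[user][topic] += 1
        if was_accepted:
            self.accepted[user][topic] += 1

    def weight(self, user, topic):
        # Laplace-smoothed acceptance rate, so brand-new users start near 0.5
        a = self.accepted[user][topic]
        n = self.submitted[user][topic]
        return (a + 1) / (n + 2)

rep = TopicReputation()
for _ in range(8):
    rep.record("alice", "chit-chat", True)
rep.record("alice", "philosophy", False)
print(rep.weight("alice", "chit-chat"))   # high: a strong chit-chat track record
print(rep.weight("alice", "philosophy"))  # low: philosophy suggestions get rejected
```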

warty goblin
2014-06-01, 11:57 PM
Isn't this basically a really complicated board of directors?

NichG
2014-06-02, 06:41 AM
Isn't this basically a really complicated board of directors?

The question is whether one can build a system where the personality of Bob is preserved even when the individual members of the community come and go. I'm not claiming that there aren't other things in the world which preserve information - noticing such things is basically what led to this idea. Generally, however, the sorts of information that gets preserved by social dynamics tends to be more abstract: stuff like oral histories, cultural values and beliefs, etc - not a specific persona.

So the question is: can someone build an immortal 'person' who is effectively the same person century to century, even if the individual brains comprising them are constantly changing and being replaced? Essentially, the point is to try to characterize the retention and transmission of the core information that makes a person who they are and lets them behave in a consistent, person-like fashion.

The decoherence time of parts of the resultant personality can probably be measured by text-analysis methods and modern data-mining tools (things like word-choice frequency, emotional content of Bob's responses, etc), and it's going to vary depending on how the site is set up and things like that, so in principle this can also tell you what sorts of information channels provide the most effective way of communicating core aspects of identity. So it's a nice testbed for experimentation as well as just sort of an interesting thing.
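
One crude version of that measurement (a sketch - a real analysis would use richer features than raw word frequencies): compare word-frequency vectors of Bob's responses from two periods via cosine similarity, and watch how similarity decays as the time lag between periods grows.

```python
from collections import Counter
import math

def word_freqs(texts):
    """Normalized word-frequency vector over a batch of responses."""
    c = Counter(w for t in texts for w in t.lower().split())
    total = sum(c.values())
    return {w: n / total for w, n in c.items()}

def cosine(p, q):
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(p[w] * q.get(w, 0.0) for w in p)
    norm = math.sqrt(sum(v * v for v in p.values())) * math.sqrt(sum(v * v for v in q.values()))
    return dot / norm

early = ["i love talking about the sea", "the sea is calm today"]
late = ["i love talking about the mountains", "the mountains are calm today"]
print(cosine(word_freqs(early), word_freqs(late)))  # < 1: the persona has drifted
```

Plotting this similarity against time lag and fitting a decay would give one operational number for the 'decoherence time'.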

Grinner
2014-06-02, 06:57 AM
Errr....That sounds awfully close to Cleverbot (http://www.cleverbot.com/).

It works by asking the user questions and observing the user's feedback. After doing this enough times, it can eventually work out the "proper" response to any given input. In theory, at least. In reality, it's a disjointed, amusing wreck.

I do like the idea of using humans as part of a multi-agent system, but keep in mind that the individual components of the system can be...well, uncooperative.

NichG
2014-06-02, 07:47 AM
I guess it depends how Cleverbot works. The site is running very slowly for me, but it looks sort of like a more sophisticated version of the old Markov-based bots, creating an expert system to select/combine pre-existing responses in order to try to produce answers statistically consistent with human ones. Some of the examples where it gets really screwed up seem to be cases where it takes things said to it as if they are equally valid as things for it to say to others (e.g. it calls users a website and things like that).

That's not really the particular thing I'm getting at here though. I mean, I'm not trying to make an AI here where the machine emulates a human. Rather, it's 100% human in the actual implementation and there's never an explicit machine generation of the interaction stream - it's an AI built out of humans, basically. So things like calling the user a website because it doesn't understand the difference between itself and the person it's talking to shouldn't happen accidentally - it would take something like 50-100 people agreeing that it makes sense for Bob to say that.

Hubert
2014-06-02, 09:44 AM
I guess it depends how Cleverbot works. The site is running very slowly for me, but it looks sort of like a more sophisticated version of the old Markov-based bots, creating an expert system to select/combine pre-existing responses in order to try to produce answers statistically consistent with human ones. Some of the examples where it gets really screwed up seem to be cases where it takes things said to it as if they are equally valid as things for it to say to others (e.g. it calls users a website and things like that).

That's not really the particular thing I'm getting at here though. I mean, I'm not trying to make an AI here where the machine emulates a human. Rather, it's 100% human in the actual implementation and there's never an explicit machine generation of the interaction stream - it's an AI built out of humans, basically. So things like calling the user a website because it doesn't understand the difference between itself and the person it's talking to shouldn't happen accidentally - it would take something like 50-100 people agreeing that it makes sense for Bob to say that.

Your system and Cleverbot are still very similar: they use human-generated input, with an algorithm to determine which piece of information will be the answer to a given request. "Producing answers statistically consistent with human ones" is basically voting on which is the best answer to give. Your system will probably have less attention deficit problems than Cleverbot though :smalltongue:

NichG
2014-06-02, 10:44 AM
Your system and Cleverbot are still very similar: they use human-generated input, with an algorithm to determine which piece of information will be the answer to a given request. "Producing answers statistically consistent with human ones" is basically voting on which is the best answer to give. Your system will probably have less attention deficit problems than Cleverbot though :smalltongue:

How is selection via human voting the same as statistical consistency? Generally the problem with statistical-consistency approaches (e.g. hidden Markov models, Bayes classifiers, etc) is that you can't actually sample the entire problem space with any sort of reasonable coverage, because that requires that the bot had the exact same conversation that it's having now at some point in the past.

So what people do is make simplifying assumptions, e.g. making some kind of projection from the actual conversation down to a more reasonable number of degrees of freedom (e.g. rather than looking at the entire sentence, choose a subset of rarely used words as markers, and rather than looking at their grammatical context, just use whether they're present or absent). That's likely the sort of thing Cleverbot uses, considering they say on the site 'you're never talking directly to another user'. Any projection like that will introduce weird, un-humanlike behavior (and not making a projection isn't an option, because that's just equivalent to having a human answer directly - e.g. the Chinese Room argument).

So I do think the results would actually be qualitatively different, in the same way that us having this conversation is qualitatively different than me having a conversation with a bot which has performed a statistical analysis of your posts and generates statistically consistent output within the limits of available sampling data.

I think the way to see that is, imagine if the voting step were omitted - then 'Bob' is basically just something like Stackexchange, where anyone can post 'the response' to a question or query. Now with the voting system, certain answers are brought to the top of the stack in relevancy; if one were to just take the top answer of a Stackexchange query, that becomes close to the thing I'm describing. The extra thing that makes it different is the channels of feedback presented to people who are providing the responses - basically imagine if when you went to something like Stackexchange to answer a question, you also received specific annotations and instructions that you were to follow that were invisible to the rest of the world, and those annotations and instructions themselves came from the question/answer process. E.g. 'answer this question as if you were currently sad' or 'answer this question, but keep in mind the following thing while you do'.

For Stackexchange that'd be a hindrance, but for something like Bob the focus is less on providing factually correct information and more on providing conversation and self-directed action that is consistent with an emergent personality. Since every step of the thing is human-driven, there's no reason that Bob couldn't paint a picture, compose music, or even play a (turn-based) video game or tabletop campaign.

gomipile
2014-06-02, 11:25 AM
When I clicked this, I misread "immortal" as "immoral." Which, I suppose, also describes many systems run by committee.

Grinner
2014-06-02, 11:35 AM
For Stackexchange that'd be a hindrance, but for something like Bob the focus is less on providing factually correct information and more on providing conversation and self-directed action that is consistent with an emergent personality. Since every step of the thing is human-driven, there's no reason that Bob couldn't paint a picture, compose music, or even play a (turn-based) video game or tabletop campaign.

...Or play Pokemon! (https://en.wikipedia.org/wiki/Twitch_Plays_Pok%C3%A9mon)

Only took them five times longer than me.

NichG
2014-06-02, 12:23 PM
...Or play Pokemon! (https://en.wikipedia.org/wiki/Twitch_Plays_Pok%C3%A9mon)

Only took them five times longer than me.

That's awesome! And only a few months ago too. There's some interesting data here.

One thing is that without a voting filter, the gameplay was incredibly dysfunctional, which is consistent with what I'd expect for Bob. The really interesting (and perhaps worrisome thing) is that people seemed to really hate the voting filter despite that. Which means that getting people to accept and actually like the voting aspects would be a significant challenge. It may come down to how the project is initially presented and to what degree the voting system can be integrated as a sort of 'game' for the people running Bob to play.

It also suggests that the people who really get into this kind of thing enjoy, and may in fact be looking for, highly inconsistent behavior. E.g. the thing that made the Twitch Pokemon thing compelling to people was the very fact that most of the time it worked horribly, and out of that it could occasionally be made to work and progress. I suppose it's similar to why people play/watch people playing games like QWOP, because the dysfunction is actually the intended goal.

So if that's the character of the audience it attracts, that predicts certain things about the kind of personality that would emerge in Bob. Whether or not that's a problem I suppose depends on specific goals - since that particular persona is a manifestation of the underlying bias of the internet community, the decoherence time measured wouldn't indicate anything about the specific transmission channels used in Bob but would just measure the timescale of change of the internet zeitgeist. Which is interesting but not what I was intending to try to measure.

If one would want to move away from that particular attractor, there'd need to be something that would change the type of audience Bob attracted. If there were money involved it might achieve that (Bob has a wallet with real money in it and you can try to have him spend money in certain specific predefined ways that prevent you from just having him mail you a check for all of it, and you get a royalty if your idea was chosen), but of course that makes it harder to actually go forward since it'd need some form of support through grants or whatnot.

Edit: The linked 'Kasparov versus the World' thing is a much older example of this idea, I guess. Kasparov enjoyed it at least!

endoperez
2014-06-03, 04:17 AM
After the initial thing, "Twitch plays" support will become better integrated into games. Several projects are already underway. Not just old games rigged up to Twitch, but new games designed from ground up for audience participation.

Choice Chamber was actually conceived of before the Twitch Plays Pokemon phenomenon.

Daylight included some minor Twitch-triggered events.


Here's an article (http://gamasutra.com/blogs/KennethBlaney/20140301/212032/Twitch_Plays_Pokemon_and_accidentally_Popularizes_a_New_Genre.php) that points out Pokemon worked so well because it's a famous game, so the players could make informed choices and plan ahead.

ed: fixed link

Hubert
2014-06-03, 05:28 AM
How is selection via human voting the same as statistical consistency? Generally the problem with statistical-consistency approaches (e.g. hidden Markov models, Bayes classifiers, etc) is that you can't actually sample the entire problem space with any sort of reasonable coverage, because that requires that the bot had the exact same conversation that it's having now at some point in the past.


As I understand it, Cleverbot works as follows: for each given input, look for all human-generated responses and choose the most appropriate based (I would guess) on what people replied most frequently to this input.

In your system, the output is only provided by a small group of people, but many more people will vote for the best answer. Voting can be seen as "I would have answered something like that to this input".

In both cases, you have human voting in my opinion. In Cleverbot's case, people are voting by providing their answer, in your system they vote by selecting one possible answer in a given set.

NichG
2014-06-03, 06:26 AM
As I understand it, Cleverbot works as follows: for each given input, look for all human-generated responses and choose the most appropriate based (I would guess) on what people replied most frequently to this input.

In your system, the output is only provided by a small group of people, but many more people will vote for the best answer. Voting can be seen as "I would have answered something like that to this input".

In both cases, you have human voting in my opinion. In Cleverbot's case, people are voting by providing their answer, in your system they vote by selecting one possible answer in a given set.

Okay, I agree that that's kind of a form of voting, but in the case of Cleverbot it's decoupled from state, which is a significant difference. E.g. when people 'vote' on Cleverbot, they're making a selection as far as 'what would be a reasonable thing to respond in general', given no context, history, current state, or specific intent to communicate.

A very extreme example would be the response 'yes'. You might find that via sampling, 'yes' is a popular response to 'do you agree with what I just said?'. But because of the stateless nature of that kind of sampling, the result is to imply that Cleverbot always agrees. Similarly, let's look at an example like 'Please compose a poem for me'. If a user has formerly composed a poem, then Cleverbot will generally respond with that poem, because the fraction of Cleverbot's interactions that involve composing poetry is infinitesimal compared to its other interactions. A lack of sampling means that the response set is too small, and increases the degree to which it appears artificially stationary.

In the 'Bob' example, upon being asked to compose a poem 'Bob' would internally generate 50 poems and then use voting to select which poem to respond with. This poem would be generated with the context of whatever conversation Bob was just having, as well as the context given by the hidden variables encoded in the user population at that particular moment in time. That is to say, if some horrible tragedy had just happened in the world, Bob's poems would be influenced by that because Bob's component users bring that current knowledge with them.

The other distinction is that in the case of Cleverbot, the users are not being directed to specifically emulate a particular personality, so any sort of personality Cleverbot has is basically averaged out. The question with Bob is whether or not we could create a feedback mechanism that 'locks in' a particular personality regardless of the fact that the underlying users are constantly being swapped out. E.g. for a user of Cleverbot the interaction is 'talk with Cleverbot'. For a user of Bob, the interaction is 'become Bob'.

Hubert
2014-06-03, 08:13 AM
I agree with you on that: Cleverbot is (as far as I know) memoryless, so you can fool it easily by making references to previous parts of the discussion.

In your system you will not have complete absence of memory like for Cleverbot, but you will always have sliding personality traits based on the people who submit the answers and the voters. For example, let's consider a basic question: "What is your favorite color?". When first asked, your system answers "Blue". You have no way to impose that when this question is asked again in six months, the answer will be the same.

NichG
2014-06-03, 08:32 AM
I agree with you on that: Cleverbot is (as far as I know) memoryless, so you can fool it easily by making references to previous parts of the discussion.

In your system you will not have complete absence of memory like for Cleverbot, but you will always have sliding personality traits based on the people who submit the answers and the voters. For example, let's consider a basic question: "What is your favorite color?". When first asked, your system answers "Blue". You have no way to impose that when this question is asked again in six months, the answer will be the same.

Basically this is the crux of the investigation. I'm positing that it's possible to communicate enough information to enough users that this sort of thing converges. More generally, the question is 'what specific information channels are needed to make this converge?'.

I'll agree though that the more arbitrary the question, the more likely that the information is lost. This kind of thing is useful to know because it suggests where the lines exist between 'memory', 'personality', 'skill', etc. For example, let's say instead you asked something like 'do you prefer blue to red?' - does that information have a greater persistence time than asking someone to name a specific color? The hypothesis is that there is a variable amount of mutual information about trivialities like that encoded in the deeper parts of the personality. E.g. 'mellow people tend to prefer blue to red, so if Bob's personality is mellow he will consistently pick blue over red even though no individual user is aware of his previous response to the question six months ago'.

Note that humans don't have perfect persistence here either. If you ask me my favorite color, I'd guess that my own answer is reasonably likely to change on a six month timescale as well. But if you asked me something that was associated with a strong memory or important thing in my life, it might have a much longer persistence (e.g. 'my favorite color is X because of Y story'). If you ask me something really meaningless like 'what is my favorite number?' then the answers will change from day to day.

I think that sort of thing exists to varying degrees for varying questions, and what would be interesting would be to find examples of things that have particularly strong persistence in humans but weak persistence in Bob. What those kinds of things are would indicate something about how human intelligence functions.

Incidentally, one way to repair this particular decoherence would be to allow Bob to take a committee action to enter an object into a database. So basically there'd be a votable action 'whenever the phrase 'color' comes up, create a tooltip that shows the user 'my favorite color is blue''. Whenever it comes up from then on, a user could click to upvote or downvote that piece of data, and when the downvotes had enough collective weight then the data would be expunged. So it would almost be like Bob is consciously deciding what information is part of his long-term identity and what information is ephemeral.
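
A minimal sketch of that mechanism (hypothetical names and thresholds throughout): facts are keyed by a trigger phrase, and a fact is expunged once weighted downvotes push its score past a threshold.

```python
class FactStore:
    """Committee-voted long-term memory: trigger phrase -> fact, votable by users."""
    def __init__(self, expunge_threshold=5.0):
        self.facts = {}  # trigger -> [text, net weighted score]
        self.threshold = expunge_threshold

    def add(self, trigger, text):
        self.facts[trigger] = [text, 0.0]

    def vote(self, trigger, voter_weight, up=True):
        if trigger not in self.facts:
            return
        self.facts[trigger][1] += voter_weight if up else -voter_weight
        if self.facts[trigger][1] <= -self.threshold:
            del self.facts[trigger]  # the committee has expunged this memory

    def lookup(self, trigger):
        entry = self.facts.get(trigger)
        return entry[0] if entry else None

store = FactStore()
store.add("color", "My favorite color is blue.")
print(store.lookup("color"))
for _ in range(3):
    store.vote("color", voter_weight=2.0, up=False)
print(store.lookup("color"))  # None: downvotes crossed the threshold
```

In effect, Bob decides by committee which facts belong to his long-term identity and which are allowed to fade.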

Karoht
2014-06-06, 03:47 PM
I'm picturing 'the internet' playing DnD.

Player: "Okay, so I'm in a bar, sitting at a table, and the barmaid brings me my ale. What do I do internet?"
*moments later*
Player: "Huh. Okay. I snort the ale, after which I attempt to stab myself 47 times in the chest. With the ale mug."
DM: *rolls some dice* "You're dead. You only managed 23 'stab' wounds."

NichG
2014-06-06, 04:07 PM
I'm picturing 'the internet' playing DnD.

Player: "Okay, so I'm in a bar, sitting at a table, and the barmaid brings me my ale. What do I do internet?"
*moments later*
Player: "Huh. Okay. I snort the ale, after which I attempt to stab myself 47 times in the chest. With the ale mug."
DM: *rolls some dice* "You're dead. You only managed 23 'stab' wounds."

I considered using this to DM, but it seemed like there might be too many problems. On the other hand, when it comes to using something like this to run a PC, I actually think there have been examples of that on these forums. I remember a 'co-op dungeon crawl' run here a while back where the PC basically literally heard the voices of everyone giving suggestions on the thread, and chose an action based on their suggestions. Individually the suggestions were pretty incoherent, but the PC did sort of develop a particular personality pretty quickly that ended up being reasonably persistent (though of course the person running the thread had a lot to do with that).

Icewraith
2014-06-10, 04:57 PM
I don't think you'll get a persistent personality, but perhaps observers will think "Bob" has a persistent personality, since it should slowly change along with the people providing input over a long period of time? Assuming some sort of contributor rating decay over time, so people who lose interest in the project can't suddenly go back and try to vote with their original weight if a controversial question comes up, Bob's opinions on anything social/political will change with the times, but any sufficiently setting-independent hobbies or habits he develops early on will persist.

Once the pattern is established, the things the character does may stabilize, especially with a strong emphasis on "what would bob do". Unless a sufficiently extreme personality (either extremely good or extremely bad) develops early on, Bob will change his mind over time.

Ravens_cry
2014-06-11, 09:58 PM
Isn't this basically describing any organization with changing membership but a group identity? Like governments, or sports teams. The king is dead, long live the king, the Vancouver Canucks made the finals in 1982, 1994, and 2011, even though few, if any, of the people involved were the same.

NichG
2014-06-11, 10:04 PM
Isn't this basically describing any organization with changing membership but a group identity? Like governments, or sports teams. The king is dead, long live the king, the Vancouver Canucks made the finals in 1982, 1994, and 2011, even though few, if any, of the people involved were the same.

The question is basically to find out what you include in a "group identity" - your Canucks example suggests that 'skill' is something you can preserve, but can you preserve memory, likes/dislikes, philosophy, personality, etc?

And if you can preserve all of that and choose a set corresponding to a specific person, let's say, have you basically made that person immortal?

Ravens_cry
2014-06-11, 10:37 PM
The question is basically to find out what you include in a "group identity" - your Canucks example suggests that 'skill' is something you can preserve, but can you preserve memory, likes/dislikes, philosophy, personality, etc?

And if you can preserve all of that and choose a set corresponding to a specific person, let's say, have you basically made that person immortal?
The institution might be long lived, though immortal could be pushing it. Even the oldest continuously existing governments and nations are younger than the oldest trees. Certain things akin to what you mention can be passed on, in the form of, for example, tradition, culture, and precedent.

NichG
2014-06-11, 11:57 PM
The institution might be long lived, though immortal could be pushing it. Even the oldest continuously existing governments and nations are younger than the oldest trees. Certain things akin to what you mention can be passed on, in the form of, for example, tradition, culture, and precedent.

Well, immortal in the sense that it doesn't have a fixed sell-by date :smallsmile: Not 'it will literally exist forever', but 'the timescale of its persistence in principle can be extended arbitrarily'

Anyhow, the thing about culture, tradition, and precedent is that they don't actually have a persona behind them - they're abstract information. What's interesting to me is the idea that personality itself is a form of information and maybe it too can be copied and extended the same way you can copy and extend an electronic document or a set of traditions. That particular question is tricky I think because the information underlying 'personality' is so nebulous and ill-defined - we recognize personality as a thing, but unlike a codified set of traditions we wouldn't know how to record it, duplicate it, etc.

So at the stage we're at, to talk about copying a personality means that somehow we have to do so organically, and as we figure out what works and doesn't work then we get closer to actually understanding the ideal structures to use to encode personality directly. In some sense these are old ideas - the idea of an artist gaining 'immortality' through their work, for example, is related. But I don't think that experiments have been done directly on this, to ask 'is an impressionist more immortal than a surrealist?' or 'is a novelist preserved better than a painter?' or even 'can we actually extract out/transfer personality with enough fidelity to interact with it and discover things we did not explicitly know about it already through the interaction?'

Ravens_cry
2014-06-12, 03:04 PM
Actually, I would say that group entities like nations and sports teams do have certain personality traits. Some teams (and nations, but we'll not get into that for obvious reasons) are known, for example, as being aggressive and this legacy can pass on even as the team's membership changes, the whole being greater than the sum of its parts. Whether there is a qualia, a true I, is pretty irrelevant considering the only 'I' we can each say exists for certain is our own.

NichG
2014-06-12, 07:20 PM
Do you know if anyone has specifically measured the persistence in things like sports teams? Like, what is the timescale on which the team changes its 'personality traits' compared to the timescale over which it changes members? What particular aspects of the team interaction do you think are responsible for retaining the persona? Is it something about the players inducting new members into their ranks, does it have to do with the coaches (e.g. a small number of entities that stay with the team longer than individual players), or does it have something to do with the overall public perception of the team (e.g. the public reinforces the traits because players come in knowing the reputation and try to uphold it)?