Visitor Messages

Showing Visitor Messages 1 to 20 of 5440
  1. Zsych
    Yesterday 10:21 PM
    (I seriously need to fix my preference for defense over attack.)
  2. Zsych
    Yesterday 10:09 PM
    I was thinking about how one could make it scalable, but my brain rejects it as being worthless in this problem domain. Especially if your attacking nodes need to be carrying out relatively simple or repetitive behaviors.

    For complex tasks, you'd probably want to reduce the tasks into high level sub-tasks which are in turn broken down and distributed again.

    Cyber attacks are a domain I've almost never cared about beyond basic DDoS-style attacks, and even then I'm more interested in finding a provider that has decent defenses built in so I don't have to worry about it.

    ... (Alright, I actually have thought a bit about how to defend against such things on a larger scale, but those ideas are parts of other larger ideas I'm sitting on until I find a way to do something with them.)
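
    (Riffing on the decompose-and-redistribute idea above - a minimal sketch, purely illustrative: the split / is_small / run / merge names are made up, and a real system would hand the leaves to remote workers rather than run them in place.)

        def solve(task, split, is_small, run, merge):
            # Leaf: small enough to hand straight to a worker.
            if is_small(task):
                return run(task)
            # Otherwise: break into high-level sub-tasks, which are in turn
            # broken down (recursively) and distributed again, then merged.
            return merge(solve(sub, split, is_small, run, merge)
                         for sub in split(task))

        # Toy usage: the "task" is a list of numbers and the work is summing it.
        total = solve(list(range(1000)),
                      split=lambda t: [t[:len(t) // 2], t[len(t) // 2:]],
                      is_small=lambda t: len(t) <= 10,
                      run=sum,
                      merge=sum)
        print(total)   # 499500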
  3. Zsych
    Yesterday 09:15 PM

      Originally Posted by Monte314
    I'm thinking about tasking cues: "There is work to be done over here...". Think roving sharks picking up blood in the water. There is no centralized control; cues to act are ambient, and responses are autonomous and elective. This is an element of "swarm intelligence". A flock of a thousand birds can exhibit complex group behaviors, yet each bird is autonomous, and might only be aware of those nearest to it. Or, the odd phenomenon of lightning bugs synchronizing their signaling, and cicadas' group modulation of their mating calls. There are many examples.

    I wonder... Are you thinking of autonomous robots that can coordinate their activities? Is that where you need decentralized control, since the actual tasks that need to be done would be environment-dependent? In that case, I could see the more limited processing ability of the individual robots being pooled as a grid / cloud of sorts to resolve more complex issues that are applicable in that area... without the overhead of long-distance communication... which could potentially be jammed (you're not going to easily jam line-of-sight communication among little robots, so as long as they know their goals and have some ability to resolve them, it could work).

    I wonder what kind of optimizations become more relevant for actual real world objects that are physically separated. It strikes me that there may be common patterns for group organization that could be applied to solve certain kinds of problems more quickly (rather than attempting a completely general problem solving solution). It also occurs to me that even if you had a learning AI, if you developed the right training environments, you could do a lot of pre-processing to find valuable common patterns, that could then be pre-coded into those systems as initial preferences... reducing the total number of options that have to be considered in solving a problem.

    -

    One thing I wonder... Normally, we don't try to design software systems that incrementally solve towards goals (ignoring actual iterative algorithms - most of those still assume that the problem space itself is constant, and that you're looking for an optimal solution for it). If you had multiple agents acting in a changing environment, you'd want to build for short-term decision making and re-evaluation leading to more short-term decision making... incrementally optimizing an environment towards some goal state (or would it be multiple goals with different priorities? In the much larger question of something like terraforming, for example, you'd have many, many sub-tasks being slowly optimized towards... working with investments in different environments on a planet that may individually pan out or not, requiring decisions on making further effort or aborting and trying elsewhere - yet another random tangent).

    In that context, you might find this old thread of mine mildly interesting.
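
    (A minimal sketch of that short-term decide / re-evaluate loop, purely illustrative - sense, actions, score, act, and goal_reached are stand-ins for whatever the agents actually do:)

        def run_agent(env, sense, actions, score, act, goal_reached, max_steps=1000):
            # Short-horizon control: no global plan, just pick whatever action
            # looks best for the environment as it is *right now*, act, and
            # then re-evaluate from the new state.
            for _ in range(max_steps):
                state = sense(env)
                if goal_reached(state):
                    break
                # Greedy one-step lookahead; a real agent might look a few
                # steps ahead, or juggle several weighted sub-goals instead.
                best = max(actions(state), key=lambda a: score(state, a))
                act(env, best)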

  4. Zsych
    Yesterday 06:16 PM
    Hmm... taking the money idea a little further. One could develop centers / groups of problems and supporting things that have a certain amount of money allotted to them and can thus pull resources towards them.

    In modeling real-world transactions and money flow, you'd kinda be tracking task inter-dependence. An agent makes money himself when he does a task for someone who has money... which allows the task creator to move on to their next dependent task.

    This also raises questions of when a task's completion is becoming critical and thus when more resources must be expended on the dependent tasks to make sure that the overall larger task is completed on time.

    (kinda interesting theoretical questions - might have to figure out what kind of problems these ideas are actually useful in solving... I think these are issues that will become more relevant as AI advances and artificial agents take on more work)
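
    (A tiny sketch of the money-flow-as-dependency-tracking idea - task names, budgets, and the single "agent-1" worker are all made up; the point is just that paying out a finished task is what releases the tasks that depend on it:)

        # Each task can pay out a budget and lists the tasks it depends on.
        tasks = {
            "gather data":  {"budget": 100, "deps": set()},
            "clean data":   {"budget": 150, "deps": {"gather data"}},
            "train model":  {"budget": 400, "deps": {"clean data"}},
            "write report": {"budget": 120, "deps": {"train model"}},
        }

        done, earnings = set(), {}
        while len(done) < len(tasks):
            # A task is open for business once everything it depends on is finished.
            ready = [t for t, info in tasks.items()
                     if t not in done and info["deps"] <= done]
            if not ready:
                break   # cycle or dead end: nothing can proceed
            for t in ready:
                worker = "agent-1"   # stand-in for whoever takes the job
                earnings[worker] = earnings.get(worker, 0) + tasks[t]["budget"]
                done.add(t)          # ...which unlocks its dependent tasks
        print(done, earnings)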
  5. Zsych
    Yesterday 05:49 PM

      Originally Posted by Monte314
    Your "ant" model of opportunistic tasking is interesting. It occurs to me that this is exactly how Leukocytes function... and this is without centralized control.

    A lot of problem solving principles do seem to recur across nature. Single cell organisms can be quite interesting too.

    Like the bacteria that create foam. Apparently they're able to communicate and do stuff like commit suicide for the sake of the group (so that what's underneath can reproduce, explode out, and reach farther)... I think that viewing these single-cell organisms as single-celled is actually a mistake. These things are multi-cellular - just with the cells not tightly bound together... and with somewhat more freedom to pull together towards common tasks (which in their case seems to be building that foam thing / environment... which sounds like a BS form of terraforming... machines terraforming an environment into trash... but then, what constitutes trash is a matter of perspective, so in fact, the tech itself is probably interesting and worth learning from).

  6. Zsych
    Yesterday 05:34 PM
    I wonder what kind of tasks you're planning to support and how you plan to decompose them into chunks that resources can be thrown against... Also wonder what system requires that to be done automatically.

    Even so, I believe Amazon recently came out with a solution of sorts for those kinds of scalable problems, announced at their recent re:Invent conference. I'd suggest checking out the info on that.

    (Just FYI, both Amazon and Google are turning out amazing Cloud tech... Google's BigQuery for example... Kick ass!!!!)
  7. Zsych
    Yesterday 04:41 PM
    In tasks of varying size and priority, I think the "money" model might not be a bad idea. Task resolution is worth $1 million / day for 10 days. There are talented / less talented people (powerful / less powerful machines) available to address the problem, if they find it worth their time (say a minimum of $1000/day to be made by that resource, or it's too small for it to address considering the overhead of resource allocation / messaging / etc.; who knows - the minimum for a worker might vary too).

    ... That way, you pull together as many resources as make sense by the importance / value of the task itself.

    (That feels like a very vague solution - but then I don't know the actual problem space)
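
    (A rough sketch of that, reusing the numbers above - everything beyond the arithmetic is invented. Recruit workers cheapest-first, and stop once an equal split of the task's daily value would drop below someone's minimum:)

        def recruit(daily_value, workers):
            # workers: list of (name, min_per_day). Pull in as many resources
            # as the task's value justifies.
            team = []
            for name, min_rate in sorted(workers, key=lambda w: w[1]):
                share = daily_value / (len(team) + 1)
                if share < min_rate:   # the newcomer (priciest so far) says no
                    break
                team.append((name, min_rate))
            return team

        # $1 million/day task, machines that each want at least $1000/day:
        machines = [("node-%d" % i, 1000) for i in range(5000)]
        print(len(recruit(1_000_000, machines)))   # 1000 nodes make the cut

    (Since candidates are sorted by their minimum rate, once the split fails for the newest one it would fail for anyone pricier, so stopping there is safe.)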
  8. Zsych
    Yesterday 04:18 PM
    What kind of status do you want to decentralize that can't be handled through replication?

    In an eventually consistent system, it might make sense for some (local) data sources to be consistently updated before others if there are some agents / subscribers that need to respond to that information almost immediately... But near-instantaneous response is usually not necessary for... a lot of systems.

    That would also raise issues of priority queues for requests and directing priority requests to the data stores that are updated sooner... Or I could be guessing wrong.
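
    (A tiny sketch of that guess - replica names and lag numbers are invented. Requests sit in a priority queue, and the urgent ones get routed to whichever data store has the least replication lag:)

        import heapq

        # (replication_lag_seconds, replica): lower lag = updated sooner.
        replicas = [(0.05, "local-fast"), (1.5, "region-b"), (6.0, "region-c")]

        request_queue = []   # min-heap of (priority, request); 0 = most urgent

        def submit(priority, request):
            heapq.heappush(request_queue, (priority, request))

        def dispatch():
            priority, request = heapq.heappop(request_queue)
            if priority == 0:
                lag, replica = min(replicas)   # urgent: freshest data store
            else:
                lag, replica = max(replicas)   # tolerant: keep the fresh one free
            return request, replica

        submit(0, "fraud check on order 42")
        submit(5, "nightly dashboard refresh")
        print(dispatch())   # ('fraud check on order 42', 'local-fast')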
  9. Zsych
    Yesterday 03:38 PM
    (cont 2 / independent) Anyway, in a computer science context - learning how to solve problems by understanding humanity itself - I'd model it as semi-intelligent resources with some capacity to work together, and to recognize higher- or lower-priority goals to attach to.

    Notice how lots of people run after where they see more opportunity for money to be made - it's a validation of sorts of the value of actions in that category. Money is an exchange of value among humans, and the opportunity to make money is not just the opportunity to get for yourself, but also to offer significant amounts of value to the system, where it is needed.

    So one question would be: Is this distributed peer to peer spread of information and prioritization of tasks, better than a more centralized model? When is it better than a centralized model? (IMO, that usually goes to whether the problem is well understood from the center or not, and if there are main overriding goals that are clear - rarely the case among humans, and the complexities of real life problems.)

    ... But even so, such models could be valuable in helping humans understand how to better self-organize depending on the problem spaces they are oriented at dealing with (since methods of organization impact efficiency - and one needs different styles to solve different kinds of problems in a more efficient manner... And efficiency in turn matters, because resources wasted are resources that can't be used to do good elsewhere, thus limiting your growth and your ability to create a better future for future generations).
  10. Zsych
    Yesterday 03:29 PM
    (cont...)

    In that regard, I personally consider some of those emotional response functions to be akin to a built-in law enforcement / error correction function in the group... Projecting insolence, for example - that you believe you've acted badly and fully expect to be able to avoid accountability - evokes more anger (forceful problem resolution) than if the person is remorseful and inclined to self-correct... because the latter isn't likely to become a cancer to the system if allowed to go unquestioned.

    ... Which again goes to larger questions of how a system can be resilient - maintain quality / get results - especially when earlier humans didn't have very advanced language (or, if we're going to take the religious view - then we could perhaps assume that Adam did have great knowledge, but that much of it was lost, thus the simpler form of historical evidence we see of humanity's behavior through much of history).
  11. Zsych
    Yesterday 03:29 PM
    (Aside from being very busy) That sounds interesting.

    Although, I'd consider it more an aspect of the process by which many separate goals are achieved by a larger pool of free agents that can gravitate towards areas where resources are needed (with various goals being added and removed at all times, and some consistent rules also in play, like the need to keep the resource itself alive and functioning) --- rather than just distributed situational awareness. The distributed situational awareness exists to help solve problems...

    ... And generally, if something "needs" to get done, then who does it isn't that important... You also have to consider earlier humans being subject to far harsher survival constraints (which should actually make one wonder what additional programming normally kicks in when danger for the group is involved - since that implies urgency for problem / threat resolution... and thus likely triggers overrides and additional optimizations to get results out of groups).

    As a parallel to this discussion, distributed intelligence and problem solving is actually found in other creatures as well. Ants, for example, will prospect new places to create colonies, and seem to approximate analysis of many different criteria... with different ants going to prospect different places and then bringing the information back to the center, with the colony as a whole then taking action (even though the ones prospecting various regions likely didn't go to prospect the others - so there's a decent amount of communication of sorts going on to combine the results of the different assessments into meaningful action).

    Naturally, humans tend not to view humans from the perspective of what deeper technological principles are incorporated in our overall design. There's also a problem in this case, that you're dealing not with concrete things, but rather with composition / aggregation of a larger data set... Most humans aren't that into processing and deriving meaning from large amounts of data (and depending on the problem spaces they are oriented at dealing with, that preference may not be required, nor be an efficient direction for optimization of skills).

    -

    Anyway, going off on another tangent... The Ferguson drama. Where my idea here could certainly be wrong... On one side, the kid that got killed had recently committed a crime. On the other, the police officer didn't know that. However, humans will typically project (through body language) their assessment of their situation / place in relationships. A recent criminal has more reason to display those indicators... Kinda like a child that knows he has made a mistake... You don't know what he's done, you just know that the child feels he needs to be punished... Whatever his words, his underlying programming is revealing that actionable information (as part of older human tech that helped our ancestors get along together when they lacked such complex societal systems or language).

    (cont...)
  12. Zsych
    Yesterday 01:36 PM
    I was thinking about what the application of a preference for small talk would be from a system design point of view - why such a quality might evolve (or be programmed, if you go with intelligent design). From a larger perspective, humans are like a pool of resources that are getting pulled into serving various goals... Goals they usually can't solve individually. Which forces collaboration (sometimes like a P2P distributed Map/Reduce kinda system. LOL). But anyway,

    Among other things, small talk improves resilience in the system by helping people keep track of smaller shared events in the system.

    Now humans will often aggregate events when they talk to people about what's happening, but in small talk, you get access to many of the pre-aggregation events as they happen, letting you develop a more complete picture of things internally (outside of the aggregate / summary that other people might normally provide). This is naturally more relevant in cases of greater interdependence, where you may have to take over responsibilities of the other party in dealing with common problem domains (system resilience against node failure... like if someone fell sick).

    Of course, this is still a simplification. Human conversation has other aspects of distributed problem solving, like distributed error checks as well (which creates its own batch of complications too)...

    Anyway, I was amused, thinking of small talk in the context of pre-aggregation system events, for the sake of shared modeling of relevant details, for the sake of increased redundancy in the system (even if it also has other applications)
  13. Zsych
  14. Dung
    11-25-2014 04:57 PM
    I was just kidding you. I know you are a scholar and always mean well, Monte.
  15. Dung
    11-25-2014 04:50 PM
    Dung commented on Future looks very bleak
    That's rough.
  16. Idiotes
    11-25-2014 09:58 AM
    Thanks. I don't get to use it for much at work, but hoping to get my book finished in the near future. I like writing as a hobby - really enjoy it.
  17. eagleseven
    11-24-2014 06:39 PM
    eagleseven commented on Well Fuck
    Didn't expect to be dealing with it so soon. And I know, neither did you.
  18. wolkenkraetzer
  19. Mikk
    11-24-2014 06:16 AM
    Mikk commented on Random Thoughts Thread
    because Bark Waiter!
  20. GhenghisKhan
    11-24-2014 01:35 AM
    Hello jedi math dog aka monte.

    I like welcome. I also like jedis, math, and dogs.

About Me

  • About Monte314
    Biography
    Jedi Math Dog!
    Gender
    Male
    Location
    Melbourne, Florida
    Interests
    Math, Astronomy
    Occupation
    Chief Scientist, Professor
  • Personality
    MBTI Type
    INTJ
    Astrology Sign
    Scorpio
    Brain Dominance
    Left

Statistics

Total Posts
Visitor Messages
General Information
  • Last Activity: Today 06:06 PM
  • Join Date: 04-30-2008
  • Referrals: 0

Friends

Showing Friends 1 to 20 of 492
