Perakp wrote: Instead of all mobs having their own say.dm, try to have one say.dm that works for everyone.
What? No, this is a terrible idea.
Have the different filters be accessed through flags instead of hardcoded lists.
I don't even know what you mean by this, since "filter" can mean so many things.
Instead of a recursive mob check that searches for mobs recursively and then checks whether they have a client, just go through the list of players like play_sound() does. Just make sure each mob has a well-defined turf they are listening on, even if they are in a box in a backpack in a locker (pAI mob in pAI item 2014, the dream).
PERHAPS. (btw pAI mobs are on the pAI item's turf because they couldn't see emotes, not because they couldn't hear :^])
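For what it's worth, the player-list approach could look roughly like this. player_list and get_turf() are the usual SS13 global/helper; get_listeners() and everything else here are made-up names, not anything that exists yet:

```
// Rough sketch: find listeners by walking the player list instead of
// recursing through contents.
/proc/get_listeners(turf/source, range = 7)
	var/list/listeners = list()
	for(var/mob/M in player_list)
		if(!M.client)
			continue
		var/turf/T = get_turf(M)	// the "well-defined turf they are listening on"
		if(T && T.z == source.z && get_dist(source, T) <= range)
			listeners += M
	return listeners
```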
MisterPerson wrote: [ ] Needs to handle say, binary, ling hivemind, alien hivemind, emotes(maybe), whispers, and dsay. Radio messages are complex and need to consider the radio controller.
[x] OOC communication is to be left as-is. That means OOC itself, ahelp, pray, PR announcements, admin announcements (not event announcements) and Asay.
[ ] Ideally say() would be atom/movable level so vending machines and bots can use this, although to start with just doing mob/say() is fine.
[ ] Instead of having the sayer recursively tell everything in range that they said something (and then having every radio do the same thing for radio messages!), sayers should tell every listener they said something and have the listeners test if they can hear that sayer somehow. This will solve the pAI and brain issue, plus will make radio messages much cheaper.
[ ] Listeners should be any atom/movable. No object-specific behavior is needed other than hooking into the listening system and whatever that object does with heard messages. atom/movable/proc/Hear() is a must (rough sketch after this list).
[ ] Different levels of listening so we can have different channels. Alien hivemind vs ling hivemind, etc. Adding more channels should be as easy as possible.
Probably other issues, but that's just what I can think of offhand.
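Roughly what that listener-driven model could look like. send_speech() is a made-up name, and get_listeners() is the hypothetical helper from the earlier sketch (it would also need to pick up listening objects like radios, not just player mobs):

```
// Listeners get told about speech; the default Hear() does nothing.
/atom/movable/proc/Hear(message, atom/movable/speaker, message_langs, raw_message)
	return	// mobs, radios, pAIs, brains, etc. override this

// The sayer notifies every listener instead of recursing over range.
/atom/movable/proc/send_speech(message, raw_message, message_langs = 0, range = 7)
	for(var/atom/movable/listener in get_listeners(get_turf(src), range))
		listener.Hear(message, src, message_langs, raw_message)
```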
I'm hoping I won't have to touch radio code too much. Channels (as in, innate channels like binary, lingchat and the alien hivemind) could MAYBE be done with a "channel" datum that is referenced in a list in the mob or mind and contains all people with access to that channel? Dsay hopefully won't be too hard.
I DO plan on making say() atom/movable level, but at that level little will be handled, because of the point you made right after this. I do plan on giving everything a hear_say() proc. The main issue here is whether to pass a "raw" message (only the text that is originally passed as an argument to say()) or a "processed" message (the full text that shows up for the user). The first allows much more flexibility, but is slower than the second since every hearer has to process the message themselves. The way to go here is maybe to pass both?
And as I just said, hearing channel messages would probably happen differently from hearing "normal" messages. Maybe something like channel_hear(message, channel) (where channel is a bitflag).
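Very roughly, something like this (everything here is a made-up sketch, none of these names exist yet):

```
#define CHANNEL_BINARY     1
#define CHANNEL_CHANGELING 2
#define CHANNEL_ALIEN      4

// A channel datum that knows who is tuned in.
/datum/speech_channel
	var/channel_flag = CHANNEL_BINARY
	var/list/members = list()	// mobs (or minds) with access to this channel

/datum/speech_channel/proc/broadcast(message)
	for(var/mob/M in members)
		M.channel_hear(message, channel_flag)

// "raw" is the text as typed, "processed" is the fully formatted text.
/mob/proc/hear_say(raw_message, processed_message, atom/movable/speaker)
	return

/mob/proc/channel_hear(message, channel)
	return
```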
ON BITFLAGS AND LANGUAGES:
I plan on implementing languages properly, which will just involve a "languages" bitflag that all hearers have and that will be used to determine whether they can understand a message (just a simple if(message_langs & languages); special behavior for hearing different languages can be done in else if()s to maximize performance).
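i.e. something along these lines. The defines, the languages var and translate_message() are all made up here; stars() is the usual text-garbling helper (or whatever equivalent):

```
#define LANGUAGE_HUMAN 1
#define LANGUAGE_ALIEN 2

/mob
	var/languages = LANGUAGE_HUMAN

// Returns the text this mob actually perceives for a message spoken in message_langs.
/mob/proc/translate_message(message, message_langs)
	if(message_langs & languages)
		return message			// understands at least one of the message's languages
	else if(message_langs & LANGUAGE_ALIEN)
		return "*hiss hiss*"		// example of per-language special behavior
	return stars(message)			// otherwise garble it
```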
One problem with a Hear() proc is that every hearer would have to compare distance to the sayer, because things like intercoms can only hear you if you're right next to them, etc. One solution is to do a lot of list logic to build a range_4 list, a range_3 list, and so on. The other solution is to have the things that only need to be next to the sayer do if(adjacent(src, sayer)), but that is less flexible.
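The adjacency version would at least be dead simple on the object's end, e.g. (assuming the Hear() from the earlier sketch; Adjacent() is the existing proc, broadcast_message() is made up):

```
// An intercom that only picks up speech from the tile next to it.
/obj/item/device/radio/intercom/Hear(message, atom/movable/speaker, message_langs, raw_message)
	if(!Adjacent(speaker))
		return
	broadcast_message(raw_message, speaker)	// hand the raw text on to radio code
```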
suomynonAyletamitlU wrote: I started a rewrite of say from scratch for my homestation project but I haven't done enough to say that anything really got done.
The thrust of it was a pair of datums, channel and speech, each of which absorbed part of the complicated nonsense. The speech datum did things like speech filters, and saved information about the speaker in case they got exploded while the speech/radio was processing. The speech would also keep a list of who had already heard it, so you don't hear it multiple times from multiple sources; that could apply equally well to open microphones as well as mobs.
Part of the speech datum was that I was using callback functions (delegates, etc, using the byond call() proc; I have a datum for it, not using it in production anywhere though) to format the speech, so instead of checking whether or not you have some particular status, you just add or remove callbacks from a list when the mob status changes and use the list to format your text. It also handles the multiple species/languages/encodings stuff I guess.
The channel datum was responsible for formatting the string (x says y, x whispers y, x [radio channel] says y, etc) and for determining who hears it. Emotes (silent or vocal), whispers, regular speech, radio, etc, are all different channels. I was considering having the literal radio channels use the same datum but I never got far enough to test if that was a good or horrible idea.
The nice thing about the datums is that where things overlap, you can reuse them; for example, talking into your headset is a whisper so it would be a subtype of whisper, speech and whisper share a lot of code, etc.
But as I say I never got very far on it. It was also gonna be a lot easier since the whole project was from scratch and I could ease into it all...
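If I'm reading that right, the bare bones of the speech datum would be something like this (purely illustrative, all names made up):

```
/datum/speech
	var/message
	var/speaker_name			// saved in case the speaker gets exploded mid-processing
	var/list/heard = list()			// who already got this message (stops hearing it twice from open mics)
	var/list/format_callbacks = list()	// procpaths added/removed when the mob's status changes

/datum/speech/proc/format()
	var/out = message
	for(var/callback in format_callbacks)
		out = call(callback)(out)	// byond call() as described above
	return out

/datum/speech/proc/deliver_to(mob/M)
	if(M in heard)
		return
	heard += M
	M.show_message(format())
```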
Sounds very complicated, and say() should probably be both fast and somewhat simple.
As for emotes, I would probably handle these by splitting emoting into audible_emote(), which would just call Hear() (or Hear_emote()), and visible_emote(), which would just call visible_message().
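i.e. roughly this, reusing the hypothetical helpers from the earlier sketches (visible_message() is the existing proc):

```
/mob/proc/audible_emote(message)
	for(var/atom/movable/listener in get_listeners(get_turf(src)))
		listener.Hear(message, src, 0, message)	// or a dedicated Hear_emote()

/mob/proc/visible_emote(message)
	visible_message("[src] [message]")
```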
Compartmentalization will be an important part of the new say() code. All mobs would have a can_speak() proc that is checked first, then the message is passed into process_message(), which handles filtering through masks and such, then determine_verb(), which determines whether it should be "asks" or "yells" or w/e, etc. Hear() will probably end up with a fuckton of arguments, and I'm still not entirely sure if that's a good idea.
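To spell that flow out a bit (signatures here are placeholders, nothing final; send_speech() and the languages var come from the earlier sketches):

```
/mob/proc/say(message)
	if(!can_speak())
		return
	var/raw_message = message
	message = process_message(message)		// filtering through masks, slurring, etc.
	var/spoken_verb = determine_verb(message)
	// hand off both the processed and the raw text to the listener-side delivery sketched earlier
	send_speech("[spoken_verb], \"[message]\"", raw_message, languages)

/mob/proc/can_speak()
	return !stat	// stat == 0 means conscious; mute/muzzle checks would go here

/mob/proc/process_message(message)
	return message

/mob/proc/determine_verb(message)
	if(findtext(message, "!"))
		return "exclaims"
	if(findtext(message, "?"))
		return "asks"
	return "says"
```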