Possible to show aggregate votes in a tooltip on a markercluster?

I'm building an interface that would take poll data in and show some results over a marker cluster using Mapbox, but I don't have any experience with it.

The interface would look much like this:

If you click on the cluster, instead of zooming in, a tooltip would show up with a breakdown of yes/no votes in a pie chart like this:

The Leaflet marker clusterer can be used to show pie charts directly on the map. An example is visible on this map showing the breakdown of accidents in Oslo. The code behind it is rather well explained and could be adapted to your case.
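To adapt a clustering example to poll data, the vote breakdown for a cluster has to be computed from its child markers. Below is a minimal sketch, assuming each marker carries a hypothetical `vote` option ("yes"/"no") — with the Leaflet.markercluster plugin you would typically call this from `iconCreateFunction` via `cluster.getAllChildMarkers()`:

```javascript
// Tally yes/no votes across a set of markers. The `vote` field on each
// marker's options is an assumed name, not part of Leaflet itself.
function tallyVotes(markers) {
  const counts = { yes: 0, no: 0 };
  for (const m of markers) {
    if (m.options.vote === "yes") counts.yes += 1;
    else if (m.options.vote === "no") counts.no += 1;
  }
  return counts;
}

// Hypothetical wiring into Leaflet.markercluster (needs a page and the
// plugin loaded, so it is commented out here):
// const clusterGroup = L.markerClusterGroup({
//   iconCreateFunction: (cluster) => {
//     const { yes, no } = tallyVotes(cluster.getAllChildMarkers());
//     return L.divIcon({ html: `${yes} yes / ${no} no` });
//   }
// });
```

The tooltip (or the pie-chart cluster icon from the Oslo example) would then be fed from the `{ yes, no }` counts instead of the plain marker count.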

A one-by-one count of markers is just one type of aggregation.

What about aggregating a sub-property, as a sum, an average, or something similar?

UNION ALL implicit conversion for VARCHAR(MAX) column

Is an implicit conversion in a UNION ALL, when one of the data types is VARCHAR(MAX), something to worry about?

If you look at the execution plan there will be a CONVERT_IMPLICIT warning:

(Scalar Operator(CONVERT_IMPLICIT(varchar(max),[@value1],0))), (Scalar Operator([@value2]))

Is this going to behave as a normal conversion, i.e. convert all the VARCHAR(1) columns to VARCHAR(MAX), and have an impact on performance? Is there a way to avoid it other than changing the table structure?
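For context: the branches of a UNION ALL must resolve to a single result type, and by data type precedence varchar(max) wins, so the narrower side gets converted either way. One common option is to make the conversion explicit with a CAST, which removes the CONVERT_IMPLICIT warning from the plan but not the conversion itself. A hypothetical sketch (table and column names are assumptions, not from the original post):

```sql
-- Make the widening explicit so both branches agree on the type up front.
SELECT CAST(short_col AS varchar(max)) AS val FROM t_small
UNION ALL
SELECT long_col FROM t_big;  -- long_col is already varchar(max)
```

Widening a varchar to varchar(max) is a cheap conversion per row; the more significant concern is usually that varchar(max) columns disqualify some optimizations (e.g. certain index usages), which is a property of the column type rather than of the UNION ALL.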

How to display data behind a data point in a matrix?

I'm trying to design an internal tool for software testing purposes. We have a system that consists of a number of servers. To verify that the system as a whole is stable we monitor different parameters of the system, such as memory, CPU, offline, backups etc.

Today we are doing such checks manually, meaning logging in to the system and checking its different parts to see whether they are OK. This takes a lot of time, so the idea of this tool is to quickly see that all parts have been checked and to know where the problems are, so we can focus on them rather than checking things that are already OK.

I've started with this wireframe matrix:

Green means everything is good and we don't need to dig into that area this time. Red on the other hand means a problem of some kind. For instance, the memory column will get a red dot if the system memory is increasing for many days (memory leak), but also if the memory is above a threshold it will be indicated as red.

To my question: how can I make it easy for the user to see why a certain server and area is marked red? Should I use tooltips when hovering, or have clicking the indicator open a new page with more information? Are there other solutions that I'm not thinking of?

1 Answer

I strongly believe that it would not be beneficial to the platform in the long run. In short, this falls into wanting more than voting was designed to do: it introduces an anonymous feedback mechanism which does not encourage voting (potentially the opposite), makes a tighter link between feedback and downvotes, and does not fix the core problem of people taking downvotes personally. All of this under a principle that not everyone agrees with: that you should explain your downvotes (you should not).

I will quote some of the takes in the question, not with the intent to exhaustively counter-argue every statement made, but to clarify my stance described above.

The list of reasons on that site, created specifically for Stack Overflow, is:

Being unresponsive
Image of an exception
Images of code
It's not working
No attempt
No code
No debugging
Missing exception details
No research
Too much code
Too much information
Unclear what you're asking
Unreadable code
Wrong language

We are unlikely to be in full agreement with that list. Even after a good deal of discussion, finding a list of reasons that every stakeholder would accept with "OK, let's go with that" would not be easy. For starters, "not useful" is one of the key reasons presented in the downvote button's tooltip, and yet it is not on that list. Some other reasons seem to be the kind of feedback that is:

  • Very likely provided in comments already when applicable (Wrong language)
  • Incorrectly suggesting a hard rule to downvote (No code)
  • Not even based on the content of the post (Being unresponsive)
  • Already provided anyway, once the question is closed (Unclear what you're asking, No MCVE)
  • Possibly not different enough to justify a separate item (Image of an exception vs. Images of code; No attempt vs. No research; No code vs. No MCVE vs. Too much code).

Not to mention that it is not an official resource and is not even that well endorsed here. Comments linking to the site are known to disappear quickly, in spite of the useful information within.

They are an overly simplistic rating system. Requiring a specific rating (unclear, incomplete, off-topic, etc.) would improve the quality of the rating system. A simple linear scale is not a good fit for "question quality" or "answer quality", when there are so many different ways a question can be "good" or "bad".

A linear scale is eventually required for knowing how to sort answers by vote, and when to hide questions. The part where it improves the quality of the rating system is subjective.

This entire paragraph is false. "This question does not show any research effort; it is unclear or not useful" does not explain what a downvote means, and is not specific, because it gives three possible explanations and does not specify which (if any) apply. "This answer is not useful" is again completely nonspecific. Neither message "adequately explains the logic behind the downvote", because neither message adequately explains what is wrong with the question or answer.

A downvote is not meant to convey a specific explanation. In fact, as already mentioned, "not being useful" is indeed a reason to downvote, even when it is not necessarily actionable. If this option is not listed, then you are basically passing off a different, incomplete idea of when to vote down. If it is listed and people use it, the authors will still be upset anyway, at the risk of perceiving this option as adding insult to injury if they take it personally. This last matter is an education problem, not solved by any kind of feedback mechanism.

At worst, a user could pick an irrelevant reason. However, in that case, the author of the question/answer would be able to figure out that the feedback is irrelevant (and potentially challenge it). A downvote without explanation provides no information and cannot be challenged.

There is a very serious problem with this argument. Downvotes are not meant to be challenged. If a user legitimately found the post worthy of a downvote, then no justification is in order. The only exception is fraudulent voting, which is rarely the case, and in that case you should reach out to a moderator via flags, not to the voter. If the author of a post sees the downvote, or any other feedback, as not applicable, they can just shrug it off (see Tim losing his keys). Even the assessment of whether feedback is irrelevant is not always best done by the author of a post, but this is only a matter of assuming good will first. Is someone suggesting that the question lacks an MRE? Inspect the question for what you may have missed. Mistakes can happen, sure, but so many times we witness askers insisting that their question is fine, to the point that they refuse to incorporate suggestions clearly given in text. It would not be very different from comments in this regard.

Selecting a reason is a trivial amount of friction. Actionable feedback also separates good content from bad, and helps authors create better content. The expectation that "the 'swarm intelligence' of future viewers will eventually correct the problem" may be correct on average but that doesn't help individual authors that get downvotes with no actionable feedback. Some of them will improve enough to become part of that "swarm intelligence" if given the chance. If simply rejected outright, they will most likely leave.

Again, downvotes are not designed as feedback to the author. The idea of correlating a downvote with a set of reasons from a multi-list is pretty much an attempt to make it a feedback mechanism, on a platform where the last thing people should be doing is arguing about whether a downvote is warranted or not. The rest is just "the OP wants feedback", which unfortunately is not always true, as already hinted in the FAQ.

Downvoting already carries a reputation penalty, but upvoting doesn't. "Consequence-free upvoting" without "consequence-free downvoting" is the status quo, which the FAQ is supposedly defending. […]

Hold up. Voting has been deliberately designed to be frictionless, be it up or down. The penalty for casting a downvote on answers (previously on any kind of post) only existed as an attempt to prevent abuse. Eventually, as the site realized that we should be optimizing for pearls rather than sand, the penalty was removed for downvoting questions. This penalty has also recently been indicated as one of the things discouraging downvotes. Nowhere does the FAQ defend this penalty, or explicitly claim to defend the current state of things "just because".

What is a user who has read the documentation, but still gets downvoted without explanation to do?

It sounds almost like a rhetorical question to me. "Just a user" is the answer. That they may feel they should have a right to an explanation is a problem of false expectations.

Ratings, like votes, would still be anonymous.

At least there is that, but it wouldn't prevent comments like "The blind downvoter who said there's no MCVE should have their eyes checked". Insults of this sort are still found (and deleted) fairly often, and they cause harm to the already scarce curator base. This proposal would not change that unless people are properly educated to stop taking downvotes so personally, to stop trying to find or guess who downvoted, and, again, to stop trying to challenge downvotes. This feature would emphasize the correlation of downvotes with feedback, thus being counterproductive to that goal.

The only way to prevent this is to enable this form of feedback without having to vote, but that in turn would be paradoxical, because selecting these options would presuppose that it makes sense to downvote the post. There would be a risk of people doing one and not the other, thus not downvoting where it's due. It would at least make an interesting research topic to know how often this would happen, hopefully without many heads rolling in the process. At best, give multi-select-like feedback with no voting required a trial for a month, and let SO draw some conclusions from there.

2 Answers

I found leading by example the best way to go about this.

Initially, users on Data Science SE too were very quick to downvote and close questions without leaving a comment, which is very bad for a new site (it was one, then), because it takes the same number of reopen votes to reopen a question after the OP has figured out what's wrong with it and made the necessary edits.

So, an alternate approach is to first leave a friendly comment, like:

Welcome to <> SE. This question will most likely be closed because <mention the reason properly and clearly>.

Then give the OP some time to revisit the post and edit it properly. If the OP still doesn't edit or improve the post to fit the community's policies, then by all means go ahead and close it.

Having said that, posts which are blatantly off-topic, which look like copy-pasted homework, or which display a significant lack of effort (not even a basic googling of the error, etc.) should be closed.

However, don't expect all users to be as constructive as you are. It will take time for them to see how good constructive criticism is and how it improves the site as a whole.

PS: Some OPs who were given time to improve their first posts are now regular users on Data Science SE, and are now giving the same welcoming and moderation treatment to their fellow users. And a site which was supposed to be dead long ago is now close to graduation :).

So, the culture will catch up, but it has to start with a person or two.

I assume you put every downvote or vote-to-close action under "destructive criticism".

Indeed, being accustomed to Stack Exchange (SE) rules that apply to nearly every site introduces a bias, which Shog9 nicely summed up in his comment here:

You can't leave your memories at the door even if you wanted to, @Stijn. Instead, try to remember why we do things the way we do them elsewhere. And if you don't know, find out before you blindly repeat the rituals here.

The reasons to downvote a post are in the hovering tooltip:

"This question does not show any research effort; it is unclear or not useful"

They are pretty subjective, and everyone is free to act on their personal feeling toward a post. Commenting to let the author know the concern is not mandatory, but I do think it is constructive.

Next, a vote to close a question is actually an indication that at least 5 users think there's a problem with it. For "too broad", which I assume is your main concern, that's because those users think it won't have a useful outcome: either because answering the question can't fit the format of the site and would be better addressed in a blog post, or because it is not scoped tightly enough and answers will pile up until the gem answer is hidden among them all. Here is an example of something that after a while became absolutely impractical.

You may dislike it, but closing a question is already a constructive act, pushing the asker to find out why the community thinks their question is not a good fit for the site.

The goal of SE sites is to build a knowledge base that, where possible, should not fade with time. Questions asking for lists of things are unlikely to give anything useful in the long term, but Robert Cartaino explains it better than I can in the first part of his answer here.

Stack Exchange works really well when you ask very specific questions about a problem you encountered in your day to day work — something that can be at least somewhat definitively answered as rankably "most correct."

So to get back to your exact question:

Leaving a comment explaining the reasons for a downvote or a close vote, especially on a site that is only 20 days old, is constructive behavior in my opinion, as it helps discuss the concerns. It even saves energy: if the author doesn't like your opinion and doesn't revise the question, you won't have to come back periodically to check before finally casting your vote if nothing changed. Moreover, when you comment, the author can ping you with @nickname after revising the post to let you know a change has been made.

This has already happened, with the help of a meta question for guidance here.

How to use WMIC to connect to remote machine and output OS information to a file?

I want to know how to use WMIC to connect to remote hosts and output each PC's OS information (installed programs list) to a file.

But I got the error "Node - <IP> Error: Description = Invalid query".

Actually, I think I need to be a domain administrator for permission, so I tried:

I got the error: Invalid Global switch.

I want to get every PC's OS information (installed programs list) using WMIC. I think I need to access them as a domain administrator, because all of the PCs are domain-joined, and I am an administrator.

Please help me. It seems to be OK when I try it on just my own PC:

WMIC OS get name, vendor, version >> C:\%computername%.txt

Running it like the above works and I get the txt file, but I want to query all the PCs remotely and get an information file.

One more question: is this related to a security policy or Group Policy setting, or to the firewall?
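For reference, the usual shape of a remote WMIC call puts the global switches /node and /user before the alias (os, product, etc.); placing them after the alias is one common cause of the "Invalid Global switch" error. A sketch with placeholder IP and account names (these are assumptions, not values from the question):

```bat
:: Sketch only: the IP address, domain, and user name are placeholders.
:: Global switches such as /node and /user must come BEFORE the alias.
wmic /node:"192.168.0.10" /user:"DOMAIN\AdminUser" os get name, vendor, version >> C:\remote-os.txt

:: Installed programs list; note that the "product" alias (Win32_Product)
:: only reports MSI-installed software.
wmic /node:"192.168.0.10" /user:"DOMAIN\AdminUser" product get name, version >> C:\remote-programs.txt
```

As for the last question: yes, remote WMI access does depend on the firewall (the Windows Management Instrumentation exception must be allowed on the target) and on DCOM/WMI permissions, which in a domain are often managed through Group Policy.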

The exclamation mark in a speech bubble is not an indication of an error; it shows which audio output is configured for system alerts and sound effects.

Having said that, I've experienced problems with external audio after sleep on my 2018 Mini. I suspect it's a bug. I have to kill the coreaudiod process to get things working again:

sudo killall coreaudiod

Then enter an admin password.

If you are changing the sample rate to get things working, it may be that doing so restarts coreaudiod (assuming it's the same issue).

Note that some audio apps may crash or get confused if you change the sample rate while they are running.

1 Answer

I like the idea of a light-weight outline/ToC in the sidebar. You actually get this when editing, to help you keep track of what's been changed:

Having this in the sidebar of the page could actually free up some room in the editor, which would be nice.

After 8 years I feel less strongly about the action icons: I'm used to the text links on Q&A, but plenty of folks still get confused by them. Terms like "share" and "flag" don't always mean the same thing (and in fact we've changed the names or meanings on Q&A at various points) - heck, even "comment" is potentially misleading. So while I find the iconography annoying right now, I have to admit that it consumes less space, and may not be any more confusing to new users than the text actions on Q&A.

As for adding comments to examples: I think this would make the page even more cluttered. The primary mode of interaction with Docs is intended to be editing; if folks start dumping suggestions or errata into comments, that falls apart. I do suspect we're going to need some sort of a "talk" page for these topics eventually though, something that aggregates improvement requests and associated discussion so we can resolve disagreements over goals and edits for a topic without getting into edit wars.

  • The -F flag sets the field separator. I put it in single quotes because it is a special shell character.
  • Then $1 refers to the first field of each line.

Note that since you're not really using a regex here, just a specific value, you could just as easily use:

This checks string equality. It is equivalent to using the regex /^smiths$/, as mentioned in another answer, which includes the ^ anchor to match only at the start of the string (the start of field 1) and the $ anchor to match only at the end of the string. I'm not sure how familiar you are with regexes; they are very powerful, but for this case a string equality check works just as easily.
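To make the comparison concrete, here is a runnable sketch; the data file and the field layout are assumptions following the discussion, not the original example:

```shell
# Hypothetical colon-separated data file.
printf 'smiths:x:100\njones:x:101\nsmithson:x:102\n' > users.txt

# String equality on field 1 -- matches "smiths" exactly:
awk -F':' '$1 == "smiths"' users.txt

# Equivalent anchored regex; without ^ and $ it would also match "smithson":
awk -F':' '$1 ~ /^smiths$/' users.txt
```

Both commands print only the `smiths:x:100` line; the unanchored form `$1 ~ /smiths/` would also print the `smithson` line, which is exactly why the anchors matter.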

3 Answers

Individual users can downvote for any reason; that's just how SE is set up. However, the reputation rules are designed to prevent vendetta downvoting: there is a personal cost to frequent downvoting, and downvotes only remove a fraction of the reputation gained by upvotes.

In my opinion, responsible members of the community reserve downvotes for important reasons, which they explain in comments, but that only emerges in the aggregate. Any individual downvoter is as anonymous and as unknowable as any individual Wikipedia editor.

I don't know the exact context of the question, but if you've actually seen a user downvoting based on personal opinion, know that it is forbidden. Seeing as questions should not present too much personal opinion, voting based on opinion is obviously wrong. If you encounter such behavior, let the moderators know.

Unfortunately the question didn't get any upvotes, but I hope people will see it anyway, as I think it's very important to know how to behave around here.

I have made three down-votes, total, on this SE. For all three I left a comment explaining why, but I quickly realized that down-voting was not the most effective way of responding to posts I did not like, for the following reasons:

I lost one reputation point for each of those three down-votes. So the site rules punish me for down-voting. Losing a reputation point should be a wake-up call for anyone down-voting.

The down-voted post lost only two reputation points. An up-vote would have given the poster five points for a question and ten points for an answer. One up-vote on an answer is the same as five down-votes on that answer. Again the site rules do not encourage down-voting.

The down-vote doesn't lead to any further action. A flag, on the other hand, sends the post to a review queue or to a moderator for action leading to a possible deletion. A down-vote is just a down-vote.

If the user disagrees with some assumption, a down-vote without an explanation in a comment doesn't clarify that disagreement. It is ineffective except to annoy the original poster, and I have no interest in annoying people.

So, I don't see why anyone down-votes on an SE site given the current rules, but sometimes people don't realize the options they have to deal with posts they do not like.

But, in all honesty, is that really necessary on a philosophy question site? Even if the question is insane, philosophers may be able to clarify the mistake rather than just down-vote without explanation.

Given the above I don't see the necessity of down-voting on any SE site, but I do see the necessity of commenting, flagging and voting to close or delete posts.

2 Answers

So here's some stuff to get you started:

We'll start with a more general candle image function:

Pull a test dataset out of what you provided:

Make a function for plotting these candles (we'll see why later):

Get a moving average line:

Now is where our possible paths branch:

That candlePlot is pretty fast, but not blazing fast, so we can provide a faster version of it as a single line for our dynamic edits:

Then combine these for a slow and fast version:

Then we'll just stick this all together with appropriate axis scaling and shifting (plus that line drawing thing you wanted):

By changing the candle width when the xrange changes we can ensure a consistent candle appearance.

Note that you can improve the quality here by changing that fastPlot assignment in the DynamicModule initialization to slowPlot (the commented-out one). It'll look better, but will be much slower to shift and draw.

Looks like this in the end (where I've done some x-shifting and scaling and drawn a trend line):

Obviously this is by no means a perfect drop-in for the thing you wanted, but I think it shows you how to go about it.

And if you, like Kuba, don't want to copy all of those sections, here's all the code at once:
