Take a spreadsheet. Then take a group of people (whoever you can get) and convene a meeting. Decide what the options are, and put each one on a row of the spreadsheet. Decide what the criteria for selection should be, and make each one of those a column. Now, discuss the blank cells on the spreadsheet until they all have numbers in them.
So far, so good (perhaps). Now, not all of those criteria have the same importance, do they? So you need some way to handle that, to give different weight to each of the criteria. Another number then, one for each of the criteria – a multiplication factor, probably? This is getting tricky: the group doesn’t seem to agree on what those numbers should be. Averaging will sort that out, though.
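The meeting procedure described above amounts to a weighted sum. A minimal sketch of it, with entirely made-up options, criteria, scores and weights for illustration:

```python
# The meeting's procedure, made explicit: score each option against each
# criterion, average the group's proposed weights, then take a weighted sum.
# All names and numbers below are invented for illustration.

# Rows: options; columns: criteria (say: cost, quality, speed).
scores = {
    "Option A": [3, 8, 5],
    "Option B": [7, 6, 4],
    "Option C": [5, 5, 9],
}

# Each group member proposes a weight per criterion; the group can't agree,
# so (as the questionable procedure suggests) we just average them.
proposed_weights = [
    [0.5, 0.3, 0.2],  # member 1
    [0.2, 0.5, 0.3],  # member 2
    [0.4, 0.4, 0.2],  # member 3
]
weights = [sum(col) / len(col) for col in zip(*proposed_weights)]

# Weighted sum per option; the "answer" is whichever scores highest.
totals = {
    name: sum(w * s for w, s in zip(weights, vals))
    for name, vals in scores.items()
}
best = max(totals, key=totals.get)
```

Mechanically this runs fine, which is exactly the trap: the spreadsheet will always produce a winner, whether or not the scores and weights mean anything.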
The spreadsheet is giving the answer that most of the group expected – so that’s alright then, job done. Or… it isn’t. In which case perhaps some of these numbers are wrong? Let’s see what happens if we double the weight on this one…
The world of MCDA (Multi Criteria Decision Analysis, since you ask) has one joke:
The point is – you can’t just make up your own way of adding up different things and expect to get a sensible answer. If you are just ‘using a spreadsheet’ to compare options, you might be doing it very well or very badly; it depends on how you have handled the scores and weights. This stuff is well known and understood: it’s in the textbooks and it’s in appropriate software tools. It can even be done very well in Excel. But unless you know what you are doing, your results are going to be at best unreliable, and at worst just plain wrong.
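One concrete way the handling of scores can go wrong: if criteria are left on their raw scales, a criterion measured in big numbers swamps one scored 1–10, no matter what weights you chose. A toy sketch with invented figures:

```python
# Made-up example: quality is weighted 0.9 and savings only 0.1,
# yet the raw pounds figure dominates the weighted sum anyway.
raw = {
    "Option A": {"annual_saving_gbp": 9000, "quality_1_to_10": 9},
    "Option B": {"annual_saving_gbp": 9500, "quality_1_to_10": 2},
}
weights = {"annual_saving_gbp": 0.1, "quality_1_to_10": 0.9}
criteria = ["annual_saving_gbp", "quality_1_to_10"]

# Naive weighted sum on raw values: savings' scale swamps the weights,
# so Option B "wins" despite its dreadful quality score.
naive = {
    o: sum(weights[c] * raw[o][c] for c in criteria) for o in raw
}

# Rescale each criterion to 0..1 across the options first (min-max),
# so the weights actually express relative importance.
normed = {o: {} for o in raw}
for c in criteria:
    vals = [raw[o][c] for o in raw]
    lo, hi = min(vals), max(vals)
    for o in raw:
        normed[o][c] = (raw[o][c] - lo) / (hi - lo)

proper = {
    o: sum(weights[c] * normed[o][c] for c in criteria) for o in raw
}
```

With raw values, Option B comes out ahead; after rescaling, Option A does – same data, same weights, opposite answer. This is only one of the pitfalls (direction of preference, choice of scale, weight elicitation are others), which is rather the point.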