Cells can contain two types of hyperlinks. There’s the embedded kind that you create using Insert – Hyperlink and the formula kind that you create using the HYPERLINK function. The function kind is nice because you can make the address and display text dynamic without using VBA. They’re just text arguments to a function and any function that modifies text can be used to modify them.
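For instance, here's a minimal sketch of the formula kind (the URL and cell reference are hypothetical):

```
=HYPERLINK("https://example.com/reports/" & A2, "Open " & A2 & " report")
```

Because both arguments are just text expressions, changing A2 changes both where the link points and what it says, with no VBA required.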
If you have HYPERLINK in a cell, the Insert – Hyperlink control is disabled (grayed out). Excel is wise enough to know that you shouldn’t have both kinds of hyperlinks in a cell. But it’s only half wise. Excel does not stop you from entering a HYPERLINK formula in a cell with an embedded hyperlink. If you do, you can end up with what seems like two hyperlinks in one cell.
I say “seems like” because Excel only recognizes one. And to be even more precise, it recognizes pieces of both hyperlinks to make one. Let me explain. If I type a URL in a cell, Excel converts it into a hyperlink. (Pro Tip: Press Ctrl+Z immediately after the conversion to undo the conversion, but keep the text). Let’s say I copy that down a few cells.
Now let’s say that I edit these cells to contain a HYPERLINK formula with a different address and a different display text. In this case, I’ve changed the address by adding “my” in front of it and changed the display text from the URL to the word “blog”.
If I hover over the new hyperlink, check what happens. There are three important properties of hyperlinks: Address (where it goes when you click), Text to Display (what shows up in the cell), and Tooltip (what pops up when you hover). With two hyperlinks, it appears that the Address and Tooltip are driven by the embedded hyperlink, while the Text to Display is driven by the formula.
I don’t know why it happens this way. I can’t even come up with a good story about how it’s an unintended consequence of some design decisions on Microsoft’s part. But it is what it is. I don’t know of any quick way to fix this through the user interface, but I wrote a macro to fix it.
Sub RemoveEmbeddedHyperlinks()
    Dim rCell As Range
    For Each rCell In Sheet1.Range("A2:A10").Cells
        'Delete the embedded hyperlink, if one exists
        On Error Resume Next
        rCell.Hyperlinks(1).Delete
        On Error GoTo 0
        'Re-enter the formula to restore the hyperlink formatting
        rCell.Formula = rCell.Formula
    Next rCell
End Sub
The code removes the embedded hyperlink and leaves the formula. The line that sets the formula equal to itself is there to get the blue underline formatting back. The traditional hyperlink formatting disappears when you delete the hyperlink, even though the formula remains. You can see that the tooltip now draws from the only remaining hyperlink: the formula one.
Howdy folks. Jeff here again, with my musings on the kinds of things you might put in a business case for a corporate Excel training program.
I think corporate-wide Excel training is too valuable to leave to a Learning and Development team and/or to chance (assuming an organization is large enough to have an L&D function in the first place).
- L&D likely don’t know that much about how business users are misusing Excel, so how can they offer generic training at arm’s length to remedy said misuse? At the same time, they must support numerous departments, with training being just one aspect of what they do (the other is meetings. Very important meetings, I’m told) and with Excel being just one of many programs that users need training in. So L&D simply can’t specialize in Excel training to a level that’s really going to make a difference at the coal face.
- The efficiency dividend from training accrues to the units of the people being trained. So units themselves are directly incentivized to invest if it will get them a better outcome…regardless of whether fostering increased human capital falls within that unit’s core job/mandate or not.
So instead of business units saying “It’s not our job to train…it’s theirs” I think they need to be thinking “We’d rather someone else do this, but we’re prepared to fill in the gaps ourselves if it helps our people to solve their business problems in ways that increase quality, improve productivity, and provide higher returns on investment.”
But what would a corporate Excel initiative look like? And how would you sell it to an organization that isn’t aware of the problems with the status quo?
I thought I’d have a crack at putting together a generic business case that addresses these questions. It’s not put together with any specific organization in mind…rather, it’s something gleaned from practically every organization I’ve ever worked with/for, as well as from my time spent moderating a few Excel help groups and so forth.
I’d love to hear suggested rewrites, omissions, etc. in the comments.
Executive Summary
- We believe that additional investment in Microsoft Excel-based training will help users solve their business problems in ways that increase quality, improve productivity, and provide a good return on investment, while decreasing some key data-related business risks.
- Consequently, we propose to instigate a training program above and beyond that currently offered by Learning and Development. It will focus primarily on educating users on how Excel’s deep feature set can be better utilized to solve common business issues while increasing data integrity, on exposing users to approaches they would otherwise be unlikely to encounter, and on highlighting routine practices that currently get in the way of increased efficiency.
- Given the large number of users involved, one-on-one diagnosis and training covering all users is simply not possible. Consequently this initiative primarily revolves around increasing the ability of users to self-diagnose issues and inefficiencies with their existing spreadsheet applications and practices, and to educate them on how to access resources and knowledge that will greatly assist them to self-remedy issues wherever possible. Given the resources available for this initiative, raising awareness of current bad practices and alternatives is possibly the biggest efficiency gain and risk-reducing opportunity that we can offer, at the least cost.
- Given Excel is the tool by which many people realize/express their commercial analysis, data visualization, business processes, and logic, we believe that in teaching people to use the tool itself better we can also foster increased competence in these fundamental skills/attributes.
Problem Statement
Currently our Excel user base consists largely of users at the basic level, with a much smaller contingent of intermediate users. Almost all users are largely self-taught, often with little exposure to the vast wealth of constantly evolving best-practice resources available both offline and online. As a result, despite being motivated to use Excel as efficiently as they can, even intermediate users barely scratch the surface of Excel’s productivity tools. At the same time, because some of Excel’s features are so very easy to use, those same features are also easily overused to the point of misuse. Consequently:
- The majority of users spend much more time manually formatting and manipulating data than they need to, often with little or no backwards auditability.
- Sometimes our approaches to analytical problems involve much more complexity than a more suitable approach would, with little or no additional benefit or confidence eventuating as a result.
- Many of our business processes hosted within Excel spreadsheets are convoluted and unclear. ‘Mission-critical’ spreadsheets are often bloated, difficult to audit, and subject to catastrophic spreadsheet failure – possibly without users realizing.
- Modelling practices and spreadsheet practices across the organisation are inconsistent, and there is little or no peer review or development in place to ensure people use the Excel tool smartly, wisely, and efficiently.
- Best practices and learning are not perpetuated in a formalized way throughout the organization. The emphasis of our expert users currently remains fixing things that are broken, rather than using education to avoid bad practices in the first place.
While our Learning and Development (L&D) unit offers a number of appropriately pitched courses focusing on new/basic users, these are functionality-centric rather than business-specific. Consequently, such courses often don’t expose users to alternative practices. At the same time, L&D staff are unlikely to be fully cognizant of the vast amount of quality free resources available online, as well as some paid offerings that may prove more cost-effective than the traditional course vendors we have previously used.
As a result –
- Most people keep using the very small subset of Excel’s functionality that they know about as a hammer on problems that aren’t nails but screws.
- We don’t insist on, foster, or seek to measure incremental improvement of our analysts’ skill-sets.
- We allow users to build mission-critical applications every day in Excel with little or no discipline.
The status quo incurs a very real opportunity cost to the organization, given that –
- Advanced users can often do something in a fraction of the time that it takes an intermediate user;
- The automation capability of Excel is staggering;
- It’s not actually that hard to turn basic users into intermediate ones, and intermediate users into advanced ones, so that they can do things in a fraction of the time they currently take, if not automate them completely.
Desired state
We propose to train our analysts to become not just better Excel users, but also more effective analysts. Note that this initiative isn’t just about the use of Excel itself, but related disciplines such as effective data visualization, business process design, and logic. Given Excel is the tool by which people express their analysis, data visualization, and logic, then if we teach people to use the tool better, in doing so we will also give them some pointers in thinking harder about what they are using the tool to achieve.
The desired state that we seek to arrive at would be demonstrated by the following outcomes:
- Our people would leverage more fully off Excel’s rich feature-set in order to achieve better analytical outcomes with much less effort. At the same time, they should be more fully cognizant of common and potentially avoidable spreadsheet design flaws and pitfalls.
- Our people would better judge the appropriate amount of time and precision required for any given analytical task. They will be less likely to over-analyse/over-build, and more cognizant of the diminishing returns that often accompany increased complexity of approach.
- Mission-critical spreadsheets/templates would be more robust, simpler to maintain, and user-friendly, as well as easier for successors to understand and maintain. This should result in lessened business risk via decreased risk of incorrect models and catastrophic spreadsheet failure, increased model auditability, and a more consistent approach to spreadsheet development and use across teams.
- Once our people better realize the capability of the Excel application, they will begin to actively look for other efficiency opportunities where they can leverage their new Excel skills, and to share their approaches.
Approach
This initiative is largely focused on increasing human capital – which historically has been more the domain of the organization’s Learning and Development team rather than our own unit. However, we propose entering into this space due to the following factors –
- Reduction of L&D’s technical training capacity;
- The opportunity cost of not realizing some fairly low-hanging efficiency dividends;
- The cost of procuring external training, and the risk that such training would not have the long-term positive impact on current practices that we seek;
- The increasing time pressures and demands on staff – meaning increased barriers to utilizing external trainers from both a financial and a time perspective; and
- The fact that we already have a strong interest in developing training materials that encompass Excel, Modelling, VBA, and Business Process Improvement.
The primary outcomes this initiative will deliver are –
- Increased efficient/effective use of one of our main business productivity tools – Microsoft Excel – and greater awareness of current sub-optimal processes and approaches.
- Education of users to self-identify efficiency/risk issues with existing spreadsheet-based processes/approaches and give them avenues and resources to address these issues.
- Facilitation of increased peer-to-peer learning and peer-review opportunities between Excel users throughout the wider organization.
In doing this, the initiative will take a multi-pronged approach:
- Remedial mitigation of mission-critical spreadsheets/processes
- ‘Best Practice’ efficiency/effectiveness education
- Peer-to-peer user group
- Identification, creation, and dissemination of useful resources/templates
- Evaluation of additional tools/external training opportunities
These are covered in more detail below.
1. Remedial mitigation of mission-critical spreadsheets/processes.
We will work directly on selected mission-critical spreadsheets and spreadsheet-based business processes in conjunction with business owners to identify design issues and prioritise remedial action.
2. ‘Best Practice’ efficiency/effectiveness education
We will deliver multiple training sessions/workshops on best practices covering modelling, spreadsheet risk mitigation, and using the Excel application more effectively and efficiently. These will also draw on ‘lessons learned’ from the above step. This is critical given that several of the spreadsheets prioritized for investigation in the preceding step were constructed within the last six months, highlighting that we have an ongoing issue, not just a problem with legacy spreadsheets.
Sessions will cover –
- How recent versions of Excel have extended its capability from mere spread-sheeting into business intelligence.
- Data organisation: Many things in Excel go from challenging to easy simply by changing how source data is organized.
- Understanding how Excel calculates, and avoiding calculation bottlenecks and overhead by wisely choosing/using formulas for efficiency.
- Leveraging pivot tables for aggregation and reporting; and utilising Excel’s dynamic table functionality to cut down on complexity.
- Using advanced functionality to augment user interfaces by incorporating slicers, advanced filtering, VBA (aka macros), and dynamic SQL queries (for simplified data retrieval and processing) into spreadsheets/models.
- Conditional formatting and data validation techniques that can help to better validate user input.
- Tools (both free and commercial) that can help users to work more efficiently in the Excel environment.
- Troubleshooting and streamlining existing models that users have inherited.
- How users can get free help, advice and inputs from online resources and forums.
- Modelling/analysis best practices.
- Data visualization best practices.
- Spreadsheet development best practices, including case studies covering lessons learned from recent work.
Session attendees will also have access to associated hand-outs/workbooks with supporting content and links to further in-depth resources. Each session will be repeated multiple times on a rolling basis to facilitate/encourage maximum patronage. Managers will be encouraged to actively promote attendance at these sessions and potentially include them in staff development plans if appropriate.
3. Peer-to-peer user group/help forum.
We will set up and facilitate an internal Excel User Group – potentially with a supporting software forum/message board. This will be used to –
- Provide peer-to-peer learning and support opportunities. A peer-to-peer support network that extends beyond our unit alone will provide much wider coverage than I can offer on my own.
- Identify/evaluate mission-critical spreadsheets across our unit that potentially impose significant business risk and/or administrative burden, and provide owners with some options to restructure/re-engineer them accordingly.
- Provide attendees with more hands-on exposure to tools/tricks that they can use to re-engineer their own spreadsheets, and encourage all users to utilize the skills of identified expert users on a consultancy basis for advice and peer review as needed, perhaps via the user group outlined above.
4. Identification, creation, and dissemination of useful resources/templates
We will identify/create a useful repository of quality free training resources (including construction of additional in-house resources and ready-to-use macros/templates where warranted) that supports further in-depth professional development and at the same time reduces our dependency on paid external courses. This will draw heavily on the large amount of free resources and training materials available online that equal, and in many cases surpass, paid external training content in terms of learning outcomes.
We will publish a weekly ‘Productivity Hack’ article on the organizational intranet home page suitable for users of all levels. These articles may also reference a number of the outstanding productivity resources published each week on the Internet by the worldwide Excel Development/Training community (including blog posts, technical articles, training videos et cetera).
5. Evaluation of additional tools/external training opportunities
We will work with IT to evaluate new additions to Excel such as PowerPivot – a free extension to Excel that lets users crunch, filter, and sort millions of records with very little overhead, as well as easily incorporate multiple data sources into PivotTable-based analysis, using an interface they are already familiar with. PowerPivot extends the capability of Excel to the point that it might reduce the need for some of the other applications we currently use to amalgamate discrete data sources, such as SAS.
We will also offer our assistance to Learning and Development to help them identify/rate any external training providers they will use going forward.
Howdy folks. Jeff here, back from my summer holiday in the Coromandel Peninsula in the North Island of New Zealand, where I’ve been staring at this for the last 21 days:
For the next 344 I’ll be staring at this:
God, it’s good to be home.
A while back I answered this thread for someone wanting to identify any duplicate values found between 4 separate lists.
The way I understood the question, if something appears in each of the four lists, the OP wanted to know about it. If an item appeared in only 3 lists but not all 4, then they didn’t want it to be picked up. And the lists themselves might have duplicates within each list.
We can’t simply use Conditional Formatting, because that will include duplicate names that don’t appear in each and every column, such as ‘Mike’:
Rather, we only want names that appear in every column:
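As an aside, when the number of lists is small and fixed, a plain formula can flag names that appear in every column without any VBA. This is just a sketch, assuming the four lists sit in columns A through D and it’s dragged down alongside column A:

```
=IF(AND(COUNTIF(B:B,A2)>0,COUNTIF(C:C,A2)>0,COUNTIF(D:D,A2)>0),"In all 4 lists","")
```

That approach stops scaling gracefully the moment the number of lists changes, which is why a routine that handles any number of lists is worth having.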
I wrote a routine that handled any number of lists, using two dictionaries and a bit of shuffling between them. And the routine allows users to select either a contiguous range if their lists are all in one block, or multiple non-contiguous ranges if they aren’t.
- The user gets prompted for the range where they want the identified duplicates to appear:
- Then they get prompted to select the first list. The items within that list get added to Dic_A. (If they select more than one column, the following steps get executed automatically).
- Next they get prompted to select the 2nd list, at which point the code attempts to add each new item to Dic_A. If an item already exists in Dic_A then we know it’s a duplicate between lists, and so we add it to Dic_B. At the end of this, we clear Dic_A. Notice that any reference to selecting a contiguous range has been dropped from the InputBox:
- When they select the 3rd list, then it attempts to add each new item to Dic_B, and if an error occurs, then we know it’s a duplicate between lists, and so we add it to Dic_A. At the end of this, we clear Dic_B. We carry on in this manner until the user pushes Cancel (and notice now that the InputBox message tells them to push cancel when they’re done):
Pretty simple: just one input box, an intentional infinite loop, and two dictionaries that take turns holding the current list of duplicates. Hours of fun.
Only problem is, I had forgotten to account for the fact that there might be duplicates within a list. The old code would have misinterpreted these as between-list duplicates rather than within-list duplicates. The OP is probably completely unaware, and probably regularly bets the entire future of his country’s economy on my bad code. Oops.
I’ve subsequently added another step where a 3rd dictionary is used to dedup the items in the list currently being processed. Here’s the revised code. My favorite line is the Do Until “Hell” = “Freezes Over” one.
Sub DuplicatesBetweenLists()
    Dim rngOutput As Range
    Dim dic_A As Object
    Dim dic_B As Object
    Dim dic_Output As Object
    Dim lng As Long
    Dim lngRange As Long
    Dim varItems As Variant
    Dim varOutput As Variant
    Dim strMessage As String

    varItems = False
    On Error Resume Next
    Set varItems = Application.InputBox _
        (Title:="Select Output cell", _
         Prompt:="Where do you want the duplicates to be output?", Type:=8)
    If Err.Number = 0 Then 'user didn't push Cancel
        On Error GoTo 0
        Set rngOutput = varItems
        Set dic_A = CreateObject("Scripting.Dictionary")
        Set dic_B = CreateObject("Scripting.Dictionary")
        Set dic_Output = CreateObject("Scripting.Dictionary")
        lngRange = 1
        Do Until "Hell" = "Freezes Over" 'We only want to exit the loop once the user pushes Cancel,
                                         ' or if their initial selection was a 2D range
            Select Case lngRange
                Case 1: strMessage = vbNewLine & vbNewLine & "If your ranges form a contiguous block (i.e. the ranges are side-by-side), select the entire block."
                Case 2: strMessage = ""
                Case Else: strMessage = vbNewLine & vbNewLine & "If you have no more ranges to add, push Cancel"
            End Select
            varItems = Application.InputBox(Title:="Select " & lngRange & OrdinalSuffix(lngRange) & " range…", _
                Prompt:="Select the " & lngRange & OrdinalSuffix(lngRange) & " range that you want to process." & strMessage, _
                Type:=8)
            If VarType(varItems) = vbBoolean Then 'user pushed Cancel
                lngRange = lngRange - 1
                If lngRange = 0 Then GoTo errhandler
                Exit Do
            Else
                DuplicatesBetweenLists_AddToDictionary varItems, lngRange, dic_A, dic_B
                If UBound(varItems, 2) > 1 Then
                    lngRange = lngRange - 1
                    Exit Do 'Data is in a contiguous block
                End If
            End If
        Loop

        'Write any duplicate items back to the worksheet.
        If lngRange Mod 2 = 0 Then
            Set dic_Output = dic_B
        Else
            Set dic_Output = dic_A
        End If
        If dic_Output.Count > 0 Then
            If dic_Output.Count < 65537 Then
                rngOutput.Resize(dic_Output.Count) = Application.Transpose(dic_Output.Items)
            Else
                'The dictionary is too big to transfer to the worksheet directly
                ' because Application.Transpose can't handle more than 65536 items.
                ' So we'll transfer it to an appropriately oriented variant array,
                ' then transfer that array to the worksheet WITHOUT Application.Transpose
                ReDim varOutput(1 To dic_Output.Count, 1 To 1)
                varItems = dic_Output.Items 'zero-based array of the dictionary's items
                For lng = 1 To dic_Output.Count
                    varOutput(lng, 1) = varItems(lng - 1)
                Next lng
                rngOutput.Resize(dic_Output.Count) = varOutput
            End If 'If dic_Output.Count < 65537 Then
        Else
            MsgBox "There were no numbers common to all " & lngRange & " columns."
        End If 'If dic_Output.Count > 0 Then
    End If 'If Err.Number = 0 Then (user didn't cancel)

    'Cleanup
    Set dic_A = Nothing
    Set dic_B = Nothing
    Set dic_Output = Nothing
errhandler:
End Sub
Private Function DuplicatesBetweenLists_AddToDictionary(varItems As Variant, ByRef lngRange As Long, ByVal dic_A As Object, ByVal dic_B As Object)
    Dim lng As Long
    Dim dic_dedup As Object
    Dim varItem As Variant
    Dim lPass As Long

    Set dic_dedup = CreateObject("Scripting.Dictionary")
    For lPass = 1 To UBound(varItems, 2)
        If lngRange = 1 Then
            'First pass: just add the items to dic_A
            For lng = 1 To UBound(varItems)
                If Not dic_A.exists(varItems(lng, 1)) Then dic_A.Add varItems(lng, 1), varItems(lng, 1)
            Next
        Else
            'Add items from current pass to dic_dedup so we can get rid of any duplicates within the column.
            ' Without this step, the code further below would think that intra-column duplicates were in fact
            ' duplicates ACROSS the columns processed to date
            For lng = 1 To UBound(varItems)
                If Not dic_dedup.exists(varItems(lng, lPass)) Then dic_dedup.Add varItems(lng, lPass), varItems(lng, lPass)
            Next

            'Find out which dictionary currently contains our identified duplicates.
            ' This changes with each pass:
            ' * On the first pass, we add the first list to dic_A
            ' * On the 2nd pass, we attempt to add each new item to dic_A.
            '   If an item already exists in dic_A then we know it's a duplicate
            '   between lists, and so we add it to dic_B.
            '   When we've processed that list, we clear dic_A
            ' * On the 3rd pass, we attempt to add each new item to dic_B,
            '   to see if it matches any of the duplicates already identified.
            '   If an item already exists in dic_B then we know it's a duplicate
            '   across all the lists we've processed to date, and so we add it to dic_A.
            '   When we've processed that list, we clear dic_B
            ' * We keep on doing this until the user presses Cancel.
            If lngRange Mod 2 = 0 Then 'dic_A currently contains any duplicates we've found in our passes to date
                'Test if item appears in dic_A, and IF SO then add it to dic_B
                For Each varItem In dic_dedup
                    If dic_A.exists(varItem) Then
                        If Not dic_B.exists(varItem) Then dic_B.Add varItem, varItem
                    End If
                Next
                dic_A.RemoveAll
                dic_dedup.RemoveAll
            Else 'dic_B currently contains any duplicates we've found in our passes to date
                'Test if item appears in dic_B, and IF SO then add it to dic_A
                For Each varItem In dic_dedup
                    If dic_B.exists(varItem) Then
                        If Not dic_A.exists(varItem) Then dic_A.Add varItem, varItem
                    End If
                Next
                dic_B.RemoveAll
                dic_dedup.RemoveAll
            End If
        End If
        lngRange = lngRange + 1
    Next
End Function
Function OrdinalSuffix(ByVal Num As Long) As String
    'Code from http://www.cpearson.com/excel/ordinal.aspx
    Dim N As Long
    Const cSfx = "stndrdthththththth" '2-char suffixes
    N = Num Mod 100
    If ((Abs(N) >= 10) And (Abs(N) <= 19)) _
            Or ((Abs(N) Mod 10) = 0) Then
        OrdinalSuffix = "th"
    Else
        OrdinalSuffix = Mid(cSfx, _
            ((Abs(N) Mod 10) * 2) - 1, 2)
    End If
End Function
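If you want to sanity-check the suffix logic, here’s a quick throwaway routine (the Sub name is mine) that prints a few interesting cases, including the awkward teens, to the Immediate window:

```vba
Sub TestOrdinalSuffix()
    'Expected output: 1st 2nd 3rd 4th 11th 12th 13th 21st 22nd 101st 111th
    Dim v As Variant
    For Each v In Array(1, 2, 3, 4, 11, 12, 13, 21, 22, 101, 111)
        Debug.Print v & OrdinalSuffix(CLng(v)) & " "; 'trailing semicolon keeps it all on one line
    Next v
    Debug.Print
End Sub
```

The `Num Mod 100` step followed by the 10–19 test is what stops 111 from coming out as “111st”.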