I won't be using the data column, so I'm not sure my opinion counts for much here, but "16-bit" is the generally accepted shorthand for MP3 files. People are going to expect them to be listed as 16-bit because that's how they've always been described (16/44.1). So I agree that showing 16-bit, while maybe not technically correct (MP3 doesn't really store a fixed bit depth), is traditionally correct.
By the way, Audacity opens files for editing in 32-bit float by default. That's not a reflection of the bit depth of the file itself; it's the mode in which Audacity edits the file to minimize loss of dynamic range. Bit depth relates to dynamic range; sample rate relates to frequency range. That's why no human being can detect the difference between 16-bit and 24-bit without deafening themselves in the process (you'd need to turn it up inhumanly loud for the difference to reveal itself), but a handful of people (those who can hear above 20,000 Hz) can tell the difference between 44.1 kHz and 96 kHz.
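For what it's worth, the two relationships above (bit depth vs. dynamic range, sample rate vs. frequency range) can be sketched with some quick math. The ~6 dB-per-bit rule and the Nyquist limit are standard; the function names here are just for illustration:

```python
import math

def dynamic_range_db(bits: int) -> float:
    # Quantization dynamic range: 20*log10(2**bits), about 6.02 dB per bit
    return 20 * math.log10(2 ** bits)

def nyquist_khz(sample_rate_khz: float) -> float:
    # Highest representable frequency is half the sample rate (Nyquist limit)
    return sample_rate_khz / 2

print(f"16-bit dynamic range: {dynamic_range_db(16):.1f} dB")  # ~96 dB
print(f"24-bit dynamic range: {dynamic_range_db(24):.1f} dB")  # ~144 dB
print(f"44.1 kHz captures up to {nyquist_khz(44.1):.2f} kHz")
print(f"96 kHz captures up to {nyquist_khz(96):.1f} kHz")
```

So the jump from 16 to 24 bit only buys you headroom way below the noise floor of any realistic listening setup, while 96 kHz extends the captured frequencies from about 22 kHz up to 48 kHz, which is only audible to people who can hear above the usual 20 kHz cutoff.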