Sunday, April 29, 2012

More Oracle documentation than you can shake a stick at.


Introduction

I love idiomatic speech – I think it meets some deeply felt need for trivia.  Or maybe miscommunication.  After all, if you don’t know what a given idiom means, well, we have a failure to communicate.  

That isn’t Oracle’s problem

I don’t know if you’ve been paying attention to Oracle’s various outreach programs to us, its partners and customers, but they are trying really hard to give us what we need to do our jobs.  Think about all of the things Support does; think about Oracle coming to KScope for their awesome symposium.  Those acts of communication on Oracle’s part are nice, aren’t they?  And it sure beats being on a prison chain gang (go click on “failure to communicate”).

Where oh where has my documentation gone?

There are five ways to get to Oracle documentation, and four of them are social media sites.  I don’t think Oracle would go to this trouble unless they wanted you to RT_M the documentation.  Would they?

The first one’s easy, and is the good old Oracle EPM Documentation Portal Page.  I have the Essbase link right there in my browser bookmarks toolbar so I can jump to it at any time.

Oh, big deal you say, you’ve been using it since the year dot.  Show me something new you say.  (Do you say this?  Please say yes or the next bits don’t make sense.)  New, you want new?  I’ll give (or more accurately, Oracle will give) you something new.

What’s new, pussycat?

How about getting updates to documentation from any one of four social media sites?  That’s right, Oracle doesn’t make you choose.  Take your pick, and get the latest.
Although I’m not a Twitter user, I personally like the way it presents documentation updates the best.  But as I wrote, you pays your nickel and you take your chances.

Reading is Fundamental

Okay, that’s enough obscure idiomatic speech (all linked so you can actually understand, sort of, this seriously great message about Oracle’s EPM documentation team) for one post.

After all,
R.I.F., courtesy of Oracle.  Enjoy.

Friday, April 27, 2012

Tired of Traffic? Don't Blame The Trucks.


Driving to Hartford the other day (no, you cannot really get there by train) I saw a beautiful sight:  hundreds of trucks!  What surer sign of an economic recovery?
Yet, motorists hate trucks and mistakenly blame them for traffic congestion and accidents that cause hours of delays.
Readers of this column know I’m a “rail guy” and would love to see freight trains replace trucks, but that ain’t gonna happen anytime soon.  But as motorists we should not blame truckers for traffic woes of our own creation.
Check the facts and you’ll find most highway accidents are caused by motor cars, not the trucks drawn into the incidents.
Do trucks drive too fast?  Sure, but don’t we all?  Next time you’re on I-95 check who’s in the high-speed left lane and you’ll see cars, not trucks.
Should there be better safety inspections of trucks?  Absolutely!  But for every over-weight truck or over-worked truck driver there are doubtless hundreds of unsafe cars and equally road-weary warriors behind the wheel whose reckless disregard endangers us all.
Truckers drive for a living.  They are tested and licensed to far more rigorous standards than anyone else.  And because they drive hundreds of miles each day, overall I think they are far better drivers.  When’s the last time you saw a trucker juggling a cellphone and a latte like many soccer moms?
And remember… they’re not out there driving their big-rigs up and down the highway just to annoy us.  We put those trucks on the road by our voracious consumption patterns.  Every product we buy at stores large and small, including the very newspaper you hold in your hand, was delivered by trucks.  Want fewer trucks on the road?  Just stop buying stuff.
By definition, trucks are high-occupancy vehicles.  Compare the energy efficiency of a truck delivering its cargo to you in your “SOV” (single occupancy vehicle), even if it is a hybrid.  Only rail offers better fuel efficiency.
Why are trucks jamming our highways at rush hour?  Because selfish merchants require them to drive at those times to meet their delivery timetables.  If big-box stores and supermarkets only took truck deliveries in the overnight hours, our highways would flow much better at rush hour.
Truckers must use the interstates while passenger cars can choose among many alternate routes.  Why is the average distance driven on I-95 in Connecticut just eleven miles?  Because most of us drive the ’pike for local, not interstate, trips.
If we were smart enough to “value price” our highways (i.e., a return to tolling) we’d see fewer vehicles of all kinds on I-95, and those that were willing to pay for the privilege of motoring there would get real value in a faster ride.
I’m hardly an apologist for the trucking lobby.  But neither is it fair for us to blame anyone but ourselves for highway safety and congestion.  It’s the SOV crowd, not the truckers, who are to blame.  Excessive speed and drinking cause most accidents, and the majority of accidents involve cars, not trucks.
Let’s be honest about this mess of our own making and stop trying to blame truckers as our scapegoat.  As the great philosopher Pogo once put it, “We have met the enemy and he is us!”

Sunday, April 22, 2012

Stupid Programming Tricks #12 -- How to trap for import errors in MaxL

Introduction

This is truly a Stupid Trick because I said something couldn’t be done and in fact it could.

Oh, the shame.  And then the shame of getting it slightly wrong yet again.  A-g-o-n-y.

Happily, this is not the first time in my life I have been wrong and will certainly not be the last, so my ego can take the blow.  What did Albert Einstein say about stupidity?  “Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.”   Al, here I am, proving that really, stupidity is infinite, especially if your name is Cameron.

Sigh.

But the great news is that you, oh Cameron’s Blog For Essbase Hackers readers, actually take (at least some of) what I write under consideration and review, and are kind enough to tell me where I’m wrong.  Yes, Tim Faitsch, I’m looking at your comment in my No Exit (but not in MaxL) post.  

What oh what did I write that was so bad?

Here it is:
Note that MaxL is not going to trap data or dimension load errors.  ‘Twould be nice if it could, but it doesn’t.  So it goes.

Tim quite correctly pointed out that statement simply isn’t true.  And if you (ahem) read the Tech Ref, it’s pretty darn obvious.  What was I thinking?  I think the answer is I wasn’t thinking because here it is in black and white:
Example: Iferror (MaxL Shell)
The following example script tests various errors including MaxL Shell errors, and demonstrates how you can set the exit status variable to a nonzero argument to return an exit status to the MaxL Shell.
###  Begin Script ###
login $1 $2;
echo "Testing syntactic errors...";
spool on to spool.out;
set timestampTypo on;
iferror 'End';
echo "Testing shell escape...";
shell "cat doesnotexist.txt";
iferror 'ShellError';
msh "doesnotexistlerr.mxl";
iferror 'FileDoesNotExistError';
echo "Script completed successfully...";
spool off;
logout;
exit 0;

The trick

Did you catch it?  The shell command does a cat (I believe a *nix catalog type statement) looking for the file “doesnotexist.txt”.  If cat (think type in Windows) cannot find the file there’s an error.  If cat does find the file, there’s no error.  Who cares, right?

But consider how you could apply this to an import statement which, when it hits a bad record or two (or two thousand), writes the errors to an error file.  If you could test for the existence of that error file, you’d know that if one exists, there were data load errors, and if it doesn’t, there were none.

So here’s the trick, and it’s a logical switcheroo:  an error from the shell “dir yourerrorfilenamerighthere.err” statement means that the import worked with no errors, because no error file exists (this assumes that you clear out all error files at script start), and the absence of an error really means that an import error occurred, because the dir command found the error file.  Yes, it’s opposite land, but only in a good way.
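If it helps, here is the same inverted test sketched as a hypothetical Python helper (not MaxL; the file name is just an example and assumes you delete any old error files before the load runs):

```python
import os

def import_had_errors(err_path):
    """Mirror the MaxL dir/cat trick: if the error file exists,
    the load wrote rejected records, so the import had errors."""
    return os.path.exists(err_path)

# Example: test the file the import's "on error write to" clause targets.
if import_had_errors("dData.err"):
    print("Data load errors -- go read the error file")
else:
    print("Clean load")
```

Same logic as the shell test, just with the polarity spelled out: file found equals bad, file missing equals good.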

An example with an error that’s not my own

First off, let’s establish a data source that we know isn’t going to work unless The Beverage Company becomes The Beverage and Snack Company.  What is wrong with this data load file for Good Old Sample.Basic?
 Hmm, potato chips.  So good to eat, so bad for you.  Especially when it’s a Scenario.  So we know that record three will fail.

Here’s the code of my script cleverly called LoadDataWithErrorChecking.msh:
/*
Purpose:    Illustrate checking for error in import with good ol' Dir
Written by: Cameron Lackpour
Modified:   Right now
Notes:      
*/

/*    Log in to Essbase    */
login hypadmin password on localhost ;
iferror "BadLogin" ;

/*    Define STDOUT and STDERR outputs    */
spool stdout on to "c:\\Automation\\Sample\\MaxLLog\\mshSample.log" ;
iferror "BadLogFile" ;
spool stderr on to "c:\\Automation\\Sample\\MaxLLog\\mshSample.err" ;
iferror "BadErrorFile" ;

/*    Load data from SQL    */
import database 'Sample'.'Basic' data connect as "hypsql" identified by "hypsql" using server rules_file 'dData'
on error write to "c:\\automation\\sample\\maxllog\\dData.err" ;
iferror "SQLImportError" ;

/*    Test to see if dData.err exists.  If it *doesn't*, MaxL will throw an error.  But we know that's a GOOD thing because the absence of an error file means that there were no data load errors.  If DIR *does* find the file, then there was a data load error.  And so we have to treat it like an error.  It's sort of backwards, but works.  And a big tip of the Essbase Hacker's hat to Tim Faitsch for pointing this out when I (embarrassingly) completely didn't read it in the docs.  Doh!    */
shell "dir dData.err" ;
iferror "ExitWithoutError" ;
/*    If the file was found, in fact there was an error.    */
goto "SQLImportError" ;

/*    Leave the MaxL shell    */
define label "ExitWithoutError" ;
/*    This is where the script exits if there is no error.    */
logout ;
exit 0 ;

/*    Create label for login errors    */
define label "BadLogin" ;
/*    Quit MaxL with a 10 error code    */
exit 10 ;

/*    Create label for log file errors    */
define label "BadLogFile" ;
/*    Quit MaxL with a 20 error code    */
exit 20 ;

/*    Create label for error file errors    */
define label "BadErrorFile" ;
/*    Quit MaxL with a 30 error code    */
exit 30 ;

/*    Create label for SQL import errors    */
define label "SQLImportError" ;
/*    Quit MaxL with a 40 error code    */
exit 40 ;

I am expecting an error level of 40 when I run the above code.  And so it is.


I could have controlling code that tests for the 40 return code and does things – emails, automated Very Light warnings, texts, whatever.
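As a sketch of what that controlling code might look like (Python here; essmsh being on the PATH, the script name, and the credentials are all assumptions from my example above, not anything Oracle documents):

```python
import subprocess

def classify_exit(returncode):
    """Map the MaxL script's exit codes to an action keyword."""
    if returncode == 0:
        return "clean"
    if returncode == 40:
        return "import-errors"  # email, text, fire off the warnings, etc.
    return "script-failure"     # bad login, spool trouble, and so on

# Hypothetical invocation -- adjust the names for your environment:
# rc = subprocess.run(["essmsh", "LoadDataWithErrorChecking.msh",
#                      "hypadmin", "password"]).returncode
# print(classify_exit(rc))
```

The point is simply that the 0/10/20/30/40 exit codes become branch points for whatever alerting you like.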

You will note that this line is just the Windows version of the Tech Ref documented line (I just substituted dir for cat and of course used the right file name):
shell "dir dData.err" ;

But what happens when the script and the error file aren’t in the same directory?  This is actually pretty likely, especially in automated environments where locations, launch directories, and everything else are in separate directories.

The documentation lets you down

Well, to be fair, it isn’t wrong, but it sort of leaves out a crucial bit of information.  When you run shell “dir yourerrorfilenamerighthere.err”, there is an assumption on MaxL’s part that the place you launch the MaxL script from and the location of the error file are one and the same.
But when they aren’t in the same directory the dir command can’t find the error file.  No big deal you say (Do you?  Really?  Oh good, it isn’t just craaaaaazy voices in my head.  Or are they?), I’ll simply add in a directory to make everything work, like this:  
shell "dir c:\\\Automation\\\Sample\\\MaxLLog\\\dData.err" ;

Or will it?  Let’s not even change the execution directory and see what happens:


Uh-oh.  We know there’s an error, but MaxL is telling us that there isn’t.

NOTE THE ABOVE ABOUT DOUBLE QUOTES IS NOT TRUE ALL THE TIME.  Thanks to Jason Jones (read the comments) for pointing out that "cat" really equals "type" in Windows.  In fact when you run shell "type yourerrorfilenamerighthere.err" things WORK, even when the file is in another directory.  What did I write about stupidity?  Double sigh.

So what’s missing, or more accurately, what’s not needed?

The crucial bit of information, which is still true at least on Windows 2008 R2, is that YOU DON’T NEED THE DOUBLE QUOTES.  You just don’t need them; you don’t want them at all if you’re using dir.  Get rid of the double quotes and all is sweetness and light.  When you use dir, that is.  Read below to see how type works successfully with double quotes.  Why?  Well, they’re different commands, and the difference is in how the two tools handle double quotes, I think.  The real reasons behind that different quote handling are buried in the bowels of Windows and beyond our scope.  Suffice it to say, dir pukes on double quotes and type works just fine with them.  Continuing along...

So, if you're going to test with dir, this is all you need:
shell dir c:\\\Automation\\\Sample\\\MaxLLog\\\dData.err ;

By the way, shell dir c:\\Automation\\Sample\\MaxLLog\\dData.err ; is also good.

And does it work?  Oh yes.


And you can run it from anywhere now.  Note that I am now running this from c:\Users\Administrator.


In fact, you can simply delete the double quotes altogether and run it from the source directory if you are inclined to follow the Tech Ref:
shell dir dData.err ;



If I actually had a clue about *nix, I would know that this works

Just like in the docs:

You see?  Lovely double quotes around the shell target.




Doh!

The morals of my story of error

There are five:
  1. Never say never.  Especially when people likely smarter than you  read your work (guys like Tim and Jason).  Whoops.
  2. The Tech Ref is correct when it comes to using this technique in the same directory as the launched script, but incorrect when going outside of that directory if you use dir -- it's just fine if you use type.  Btw, the Tech Ref was also right in pointing out how to do this.  Too bad I wasn't smart enough to read it and understand.  What was I thinking?  The world will never know as I certainly don't.  Again and again and again.
  3. Double quotes are sometimes good, sometimes not.  In the case of dir they are truly unnecessary, at least on Windows.
  4. I can get obsessed by this stuff sometimes – be glad that you didn’t lose an afternoon (and then another evening correcting your errors) trying to figure out why the silly shell statement wouldn’t find the error file.
  5. Maybe I should spend more time reading up on that little known OS, *nix.  I think I got the catalog = directory message from my days, gasp, on an Apple IIe. 

One last thing to consider -- it's a little weird that type handles double quotes and dir does not when passing explicitly named directories.  It would be nice if they both worked the same way.

One last, last thing to consider.  It's really gratifying that people read my posts, correct my errors, and actually care enough to get my mistakes through my thick skull so that you, dear reader, get accurate information in spite of my best efforts to confuse all and sundry.  :)  Both Tim and Jason get my thanks.

Friday, April 20, 2012

Trentino stage 4

Giro del Trentino wrapped today with a quite panoramic stage which took us from the shores of Lago di Garda to the summit of Passo Pordoi in the heart of the Dolomites. I was super motivated for today as, for a nice change, I knew pretty much all of the route from my days on training camp with Liquigas last season at San Pellegrino. My plan was to go in the break, which unfortunately was the plan of a lot of other riders as well. That meant attack after attack after attack, and after 80km of racing, a 50km/h average speed and 500m of climbing since the gun had gone off, 2 riders snuck off the front and an at-this-point decimated peloton happily took a nature break and a breather. The main field, as so often happens after a ballistic start like today's, reformed, which makes everybody wonder why we made it so hard for ourselves in the first place, and the Colnago team started tapping out their very gentlemanly tempo for the next 80km before we hit the valley road before Passo Pordoi. So all was quite peaceful in the field until about 10km from the base of the final climb, when Liquigas swarmed onto the front and drilled. Not to worry, I knew what they were up to and how they would do it, and started to ease my way forward through what was now becoming a tired peloton. Unfortunately when people start to struggle they make small mistakes, and on a harmless uphill dragging section of road someone near the front clipped someone else's tyre and caused a massive pile-up. With riders on the ground all around me I had no choice but to career straight into the carnage and hope for the best. This resulted in me stopping very suddenly, flying over the handlebars and proceeding to do a perfect somersault, quite conveniently onto another rider, landing squarely on my tailbone and head!
I feared the worst as I flew through the air, but to my absolute amazement I got up with nothing more than a groggy head, found my bike, even picked up my bottle which somehow came to rest between my legs, put my chain back on, and remounted my bike. Only problem was, by this stage the race was long gone up the road, and so close to the finish there is no charity wait for the crashers!!! Like there can be in other circumstances. So gruppetto for me today. I would have really liked to push hard on the final climb as I knew it so well and am interested to see my condition, but etiquette in cycling does not really permit you to attack the gruppetto when the race is already 5 minutes up the road. So a non-event day in the end. At the finish I was still a little dazed, and when I reached into my pockets I realised my Ventolin was in 100 pieces. I have crashed a lot with that in my pocket and never broken one, so maybe it was the slight cushion I needed to avoid any serious injury. Regardless, I am just happy to have got through this beautiful race, looking forward to some rest and then off to the USA where I will be racing the Tour of the Gila. A lot of work in the bank, so to speak, so I am looking forward to seeing what I can do there.

First some sleep, a couple more days in Gavirate, then on the plane stateside Monday morning.

The adventure rolls on!

Cjw

Sent from my BlackBerry® wireless device

Thursday, April 19, 2012

Stage 3 trentino

Today we started with a clear objective: to defend the points jersey of our German sprinter Matthias Friedemann. We didn't, so that was very disappointing. The stage started with a threatening 7km climb which was going to make controlling who was in the break hard. For me the easiest way to control this is to sit on the front and ride at a tempo fast enough to try and discourage too many dudes from attacking. This was working nicely, and at the top 5 guys had about 5 seconds on us, so easy to pull back on the flat and launch Matthias across. Unfortunately we set a little too high a pace and Matthias was dropped from the field and we had to wait. In this time the group very quickly took 2 minutes and now we had to chase it down within 70km to contest the intermediate sprints. So chase full gas we did with the 5 guys we had to work, and got it down to 50 seconds, but did not close it in time and the sprint and jersey were lost. Big bugger, but at least we tried and failed as opposed to not trying at all, and in the process we had covered the first 100km in 2 hours, so we were not mucking around. Only problem was, now we had 2 climbs to get our empty tanks over in the final 60km, with the final climb of 10km averaging 15%, so suffice it to say there ended up being an equal amount of suffering for the remainder of the race, albeit on a very empty tank. Oh the perks of the world of professional cycling!! Anyway, one great thing about stage races is that until the final stage is run and won there is always another day to refocus on, so that's what now excites me: more mountains and foul weather, they are even forecasting snow. Can't wait to get soaking wet and get stuck into it, last chance to salvage something more than some great training kms from the 2012 Giro del Trentino.



Cjw


Wednesday, April 18, 2012

Giro del trentino

Giro del Trentino is underway for 2012, and myself and my Champion System team are here in the beautiful Trento region of Italy ready to rock and roll. Yesterday was what I like to call the men's 8+, which in cycling terms is a team time trial. The secret to these is very much like rowing: team work. That means encouraging, looking out for each other and most importantly respecting each other's strengths and weaknesses. Do all this and you produce a nice smooth team TT. We did just that. We aimed to maintain a paceline, that being nice even pulls on the front and not having guys changing every 2 seconds, which in turn eliminates recovery. We aimed for 8-10 second pulls, more if you were fresh and had the legs for it, which gives the rider changing from front to the back plenty of time to get back in line, take some deep breaths and get ready to storm through again. The final catch is you need to finish with 5 guys, so therein lies the importance of going the team's speed. I love these days, so while my experience is limited I take on the role of back patter, doing what I can to keep spirits high, just like I enjoyed so much in rowing.

The result for us was a great ride; led from a technical standpoint by our most experienced man in the area, Craig Lewis, we were smooth from the gun. We traded really good even pulls early on, and in the final part those of us with a little more in the tank pulled for a little longer. We achieved our goal of not finishing last; we are not naive, and understand that on the whole we are the weakest team here, and we also did not have all the necessary equipment for a blistering TT. Our disc and super fast wheels, which are crucial in such a discipline, are locked in customs!! One of the bonuses of having stuff sent from China!! So we made do with what we had and did a really good job, and it was certainly the best display of team work within this team that I have been a part of since we were in Taiwan. A great day had by all.

As for today, well, it was a huge success, with Matthias Friedemann, who is our German sprinter, shooting up the road in the breakaway and taking the sprint jersey. The rest of the day for us climbers was not so great; we had hoped to set up Chris Butler, our demon climber, for the stage, but we got a bit out of sorts on the descent so nothing really happened! Anyway, good news is we now have a jersey to defend, so I can't wait for tomorrow.

Good night

Cjw


Tuesday, April 10, 2012

Stupid Planning queries #9 - Entity dimension

Introduction

Oh dear, I seem to have dropped off with my Stupid Planning Tricks.  The thing is, I’ve got a gazillion (roughly, there might be some slight exaggeration in that count) of these queries and I simply don’t remember which ones I’ve shared and which ones I haven’t.  I tend to get distracted with other technologies and projects…this is all a way of saying I’m a bit scatter-brained.

Enough apology, here’s the code, in case you haven’t been able to figure this out from other queries I’ve given away, to pull the Entity dimension from your Planning application.  

Some interesting columns

Here’s HSP_ENTITY in SQL Server’s table design mode – don’t worry about the brand of SQL – the table is always the same.

ENTITY_ID is simple – it’s the OBJECT_ID value of the Entity member.  How exactly you could have a NULL OBJECT_ID is beyond my understanding of Planning, as each and every object in Planning has a value, but my SQL brain is very small, so I’m not going to worry about it.  You shouldn’t worry either – Planning sets the value, not you.

DEFAULT_CURRENCY makes sense if you are using Planning’s multicurrency functionality.  If you are working in a single currency application, this field is NULL.  It’s easy (as we will see) to go grab the currency descriptions from HSP_CURRENCY.

USED_IN is kind of an interesting field.  It is described as a bit-masked value, showing which Plan Types the Entity is used in.  A bit-masked value is simply a sum:  each Plan Type has its own bit value, and an Entity used in more than one Plan Type stores those values added together.  Huh?  Actually, this is really, really easy.  Think about Classic Planning’s dimension editor for the Entity dimension.  Now think further about the rightmost field.  Do you remember those numbers?

The above is a full three Plan Type Planning application with Workforce and Capex thrown in for giggles.  Do you see the 1, 2, 4, 8, and 16?  Those are the values that make up the bitmap.  Let’s take a look at what HSP_ENTITY looks like:

There they are.  Wait, what’s that 31 value?  Well, 1+2+4+8+16 = 31.  And yes, that 31 equates to No Entity sitting in all five Plan Types.  If we had one that was in Plan Type 1 and 8, it would have a bitmap value of 9 as in Entity6:
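The arithmetic is easy enough to sketch in code (Python, purely for illustration; Plan Types 1 through 5 carry the bit values 1, 2, 4, 8, and 16 as shown above):

```python
def decode_used_in(used_in):
    """Break a USED_IN bitmask into the list of Plan Type numbers.
    Plan Type n carries the bit value 2**(n-1)."""
    return [pt for pt in range(1, 6) if used_in & (1 << (pt - 1))]

print(decode_used_in(31))  # [1, 2, 3, 4, 5] -- a member in all five Plan Types
print(decode_used_in(9))   # [1, 4] -- bit values 1 + 8, like Entity6
```

The same decomposition can of course be done in SQL with bitwise AND tests against USED_IN.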


Ta da – proof that yr. obdnt. srvnt. can add simple numbers together.

The thing that’s nice is that you can use USED_IN with HSP_PLAN_TYPE to get the descriptions of the Plan Types:


EMPLOYEE_ID and REQUISITION_NO have the respective descriptions of “Id for an employee” and “Requisition number for a to-be-hired”.  Hmm, those sound as if they are for Workforce Planning.  Know that if you don’t have Workforce, these two fields are NULL.  (Actually, I can’t seem to value these fields at all.  I must be doing something wrong so figuring this out is on The List of Things to Figure Out and thus a subject for another blog post, someday.)

ENTITY_TYPE is another one of those odd ones.  I can see that it’s there.  I can see there’s a description in the schema:  
0=none (default)
1=employee
2=department-general
3=TBH-input

But I’ve never actually seen the values change from anything other than 0.  Again, it looks like this field is used in Workforce.  I shall have to experiment for a later blog post.

Putting it all together

So let’s write a query that takes into account all of the above.  
--    Purpose:    Illustrate the querying of a custom (Entity) dimension
--    Modified:    7 April 2012, Cameron Lackpour
--    Notes:        Unique to Entity are Currency, Entity Type, and Plan Type
--               
SELECT
    PO.OBJECT_NAME AS 'Parent',
    O.OBJECT_NAME AS 'Child',
    -- Use a SQL Subquery to get aliases.
    -- NB --  The overall SELECT from HSP_MEMBER ensures that members with
    --        and without an alias are selected.
    --        ISNULL puts in zero length string in place of NULL
    ISNULL((SELECT OA.OBJECT_NAME
        FROM HSP_ALIAS A
        INNER JOIN HSP_OBJECT OA
        ON A.MEMBER_ID = O.OBJECT_ID AND
        OA.OBJECT_ID = A.ALIAS_ID), '') AS 'Alias',
    --    Use a SQL subquery to get currency tags
    ISNULL((SELECT C.SYMBOL
        FROM HSP_CURRENCY C
        INNER JOIN HSP_ENTITY EN
        ON C.CURRENCY_ID = EN.DEFAULT_CURRENCY AND
        O.OBJECT_ID = EN.ENTITY_ID), '') AS 'Currency',
    --    Remember, there can be up to five plan types per Planning
    --    application, so CONSOL_OP1 through CONSOL_OP5 are valid.
    CASE M.CONSOL_OP1
        WHEN 0 THEN '+'
        WHEN 1 THEN '-'
        WHEN 2 THEN '*'
        WHEN 3 THEN '/'
        WHEN 4 THEN '%'
        WHEN 5 THEN '~'
        WHEN 6 THEN '^'
    END AS 'Operator',
    CASE M.DATA_STORAGE
        WHEN 0 THEN 'Store Data'
        WHEN 1 THEN 'Never Share'
        WHEN 2 THEN 'Label Only'
        WHEN 3 THEN 'Shared Member'
        WHEN 4 THEN 'Dynamic Calc and Store'
        WHEN 5 THEN 'Dynamic'
    END AS 'Storage',
    CASE M.TWOPASS_CALC
        WHEN 0 THEN 'No'
        WHEN 1 THEN 'Yes'
    END AS 'Two Pass',
    --    Change the syntax of the CASE statement to support testing for NULL
    --    Dave's Unofficial Guide has DATA_TYPE = 0 as Unspecified, but in
    --    11.1.2 that isn't the case.
    CASE
        WHEN M.DATA_TYPE IS NULL THEN 'Unspecified'
        WHEN M.DATA_TYPE = 1 THEN 'Currency'
        WHEN M.DATA_TYPE = 2 THEN 'Non Currency'
        WHEN M.DATA_TYPE = 3 THEN 'Percentage'
        WHEN M.DATA_TYPE = 4 THEN 'SmartList'
        WHEN M.DATA_TYPE = 5 THEN 'Date'
        WHEN M.DATA_TYPE = 6 THEN 'Text'
    END AS 'Data Type',
    --    As far as I can tell, this isn't used in Planning, but is still
    --    there.  Why?
    CASE
        WHEN M.USED_FOR_CONSOL = 0 THEN 'False'
        WHEN M.USED_FOR_CONSOL = 1 THEN 'True'
        ELSE 'Undefined'
    END AS 'Used for consol',
    ISNULL(F.FORMULA, '') AS 'Formula',
    --    The subquery will puke if it returns more than one
    --    UDA, so do string concatenation using FOR XML PATH
    ISNULL((STUFF((SELECT ',' + U.UDA_VALUE
        FROM HSP_MEMBER_TO_UDA MU
        INNER JOIN HSP_UDA U ON
            MU.UDA_ID = U.UDA_ID AND
            MU.MEMBER_ID = O.OBJECT_ID
             FOR XML PATH ('')),1,1,'')), '') AS 'UDAs',
    ISNULL((STUFF((SELECT '; ' +
        OAN.OBJECT_NAME + ': ' + OAV.OBJECT_NAME   
    FROM HSP_MEMBER_TO_ATTRIBUTE M1
    INNER JOIN HSP_OBJECT OAN
        ON M1.ATTR_ID = OAN.OBJECT_ID AND
        M1.MEMBER_ID    = O.OBJECT_ID
    INNER JOIN HSP_OBJECT OAV
        ON    M1.ATTR_MEM_ID = OAV.OBJECT_ID
    INNER JOIN HSP_MEMBER MB
        ON O.OBJECT_ID =  MB.MEMBER_ID
    FOR XML PATH ('')),1,1,'')), '') AS 'Attribute assignments',
    CASE
        WHEN E.ENTITY_TYPE = 0 THEN 'None'
        WHEN E.ENTITY_TYPE = 1 THEN 'Employee'
        WHEN E.ENTITY_TYPE = 2 THEN 'department-general'
        WHEN E.ENTITY_TYPE = 3 THEN 'TBH-input'
    END AS 'Entity Type',
    E.USED_IN AS 'PT #',
    --    Use a SQL subquery to get Plan Type tags
    ISNULL((SELECT P.TYPE_NAME
        FROM HSP_PLAN_TYPE P
        INNER JOIN HSP_ENTITY EN
        ON P.PLAN_TYPE = EN.USED_IN AND
        O.OBJECT_ID = EN.ENTITY_ID), '') AS 'PT Name'
FROM HSP_MEMBER M
INNER JOIN HSP_OBJECT O
    ON M.MEMBER_ID = O.OBJECT_ID
INNER JOIN HSP_OBJECT PO
    ON O.PARENT_ID = PO.OBJECT_ID
INNER JOIN HSP_ENTITY E
    ON M.MEMBER_ID = E.ENTITY_ID
LEFT OUTER JOIN HSP_MEMBER_FORMULA F ON
    M.MEMBER_ID = F.MEMBER_ID
-- Entity dimension is ALWAYS DIM_ID 33, but its name
-- can vary, so use the ID
WHERE M.DIM_ID = 33

What does this look like?

How about that multiple Plan Type example?

Unsurprisingly, when an Entity is shared across Plan Types, and the bitmap doesn’t correspond to a PLAN_TYPE value, the above code can’t identify the Plan Type description.  I leave it to you, loyal reader, to figure out how to derive the name.  (Hint – it can’t be that hard – use the bitmap, simple addition, and test with CASE.)

Almost done

I think I’ve written almost every dimension there is.  I have new, exciting, Dodeca-based stuff I want to write about, so there will likely be one more post on Planning dimensions and then I may run away from Planning’s tables for a bit.  I hope you’ve enjoyed this (almost complete) series.
