Analysis Services - How to monitor the progress of cube processing using SQL Server

Hey guys!
In this article I want to share with you a tip that most people who work with Analysis Services don't know about, although they have always wanted something like it: how to monitor the progress of cube processing from SQL Server.

Most BI professionals these days only monitor how long it took to process the cube or database as a whole, but they don't know how long each dimension, each fact and each cube within a database took, and that's what I'm going to show you how to do in this post.

The first thing we're going to do is open the SQL Server Profiler to monitor cube processing:

On the login screen, remember to change the connection type to “Analysis Services”, enter the name of the instance you are going to connect to, and configure the authentication data.

Remember that Analysis Services only accepts Windows authentication (on-premises Active Directory) or Azure Active Directory.

After connecting, you will be able to define a name for the monitoring session and where it will be stored (database table or physical file).

I usually save to disk and then create a reading routine via a Job, just to avoid writing data to the database all the time. However, for traces coming from Analysis Services, the ::fn_trace_gettable function, which is used to read the generated files, does not work.
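For example, trying to read the file the same way you would read a Database Engine trace fails (the path below is just the one from my test):

-- Reading the SSAS trace file with fn_trace_gettable,
-- as you would for a Database Engine trace, raises an error
SELECT *
FROM ::fn_trace_gettable('C:\temp\SSAS.trc', DEFAULT);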

Msg 567, Level 16, State 9, Line 1
File ‘C:\temp\SSAS.trc’ is not a recognizable trace file.

So I recommend configuring the monitoring to save the data directly to a database table.

The data export table is configured:

Now click on the “Event Selection” tab to select the events you will monitor.

Generally, I check only the “Error” and “End” events from the categories below:

  • Command Events
  • Errors and Warnings
  • Progress Reports
  • Queries Events
  • Query Processing

You can change these captured events depending on the level of detail you want to monitor.

By clicking on the “Column filters” button, you can apply filters like user name, software name, hostname, session ID, etc.

In the case below, I will apply a filter to return only queries made by my user. This is useful to avoid capturing events from other users accessing the cube while you are processing, which would skew the results.

I click on “Run” to start monitoring and now I start processing the cube.

You will see a screen like the one below, showing the monitoring result.

Although it's easy to visualize the data there, I prefer to query it through SQL Server, where I can group, sum, filter and transform the data as needed.

Doing a basic query on the created table, we can already visualize a good part of the data:
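For example, something like the query below. I'm calling the destination table dbo.CubeProcessing here (use whatever name you chose when saving the trace); the column names follow the trace columns Profiler writes to the table:

-- First look at the raw trace rows written by Profiler
SELECT
    RowNumber,
    EventClass,      -- numeric event class code
    EventSubClass,   -- numeric event subclass code
    TextData,        -- command/query text, when available
    Duration,        -- milliseconds
    CPUTime,         -- milliseconds
    IntegerData,     -- e.g. rows read in ReadData events
    SPID,
    StartTime,
    CurrentTime
FROM dbo.CubeProcessing
ORDER BY RowNumber;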

But now we need to include descriptions of event classes and subclasses, to make the data easier to understand.

First, let's create 2 tables, ProfilerEventClass and ProfilerEventSubClass, to store the descriptions of the events that occur during the processing of a cube, and insert the most common event types into them.

Script for creating tables and inserting data:
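A reduced sketch of the idea, seeded with some of the most common Analysis Services event classes and Progress Report subclasses (the full list of codes is in the Analysis Services trace events documentation):

-- Lookup tables for event class/subclass descriptions (reduced sketch)
CREATE TABLE dbo.ProfilerEventClass (
    EventClass  INT          NOT NULL PRIMARY KEY,
    Description VARCHAR(100) NOT NULL
);

CREATE TABLE dbo.ProfilerEventSubClass (
    EventClass    INT          NOT NULL,
    EventSubClass INT          NOT NULL,
    Description   VARCHAR(100) NOT NULL,
    PRIMARY KEY (EventClass, EventSubClass)
);

-- Some common SSAS event classes
INSERT INTO dbo.ProfilerEventClass (EventClass, Description) VALUES
    (5,  'Progress Report Begin'),
    (6,  'Progress Report End'),
    (7,  'Progress Report Current'),
    (8,  'Progress Report Error'),
    (9,  'Query Begin'),
    (10, 'Query End'),
    (15, 'Command Begin'),
    (16, 'Command End'),
    (17, 'Error');

-- Some Progress Report subclasses (codes repeat for Begin/End/Current)
INSERT INTO dbo.ProfilerEventSubClass (EventClass, EventSubClass, Description) VALUES
    (6, 1,  'Process'),
    (6, 16, 'WriteData'),
    (6, 17, 'ReadData'),
    (6, 20, 'BuildIndex'),
    (6, 21, 'Aggregate'),
    (6, 25, 'ExecuteSQL'),
    (6, 28, 'BuildAggsAndIndexes');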

Now I can interpret the results better by joining the data from my monitoring table with these 2 new tables, as in the example below:
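A sketch of that join, again using the hypothetical dbo.CubeProcessing name for the trace table:

-- Trace rows with readable event descriptions,
-- keeping only the "End" and "Error" events
SELECT
    ec.Description   AS EventClassName,
    esc.Description  AS EventSubClassName,
    t.TextData,
    t.Duration,      -- ms
    t.CPUTime,       -- ms
    t.IntegerData,
    t.SPID,
    t.StartTime,
    t.CurrentTime
FROM dbo.CubeProcessing t
LEFT JOIN dbo.ProfilerEventClass ec
       ON ec.EventClass = t.EventClass
LEFT JOIN dbo.ProfilerEventSubClass esc
       ON esc.EventClass    = t.EventClass
      AND esc.EventSubClass = t.EventSubClass
WHERE t.EventClass IN (6, 10, 16, 17)  -- End events and errors
ORDER BY t.RowNumber;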

And it returns a table like this one:

Some important points to highlight about the returned data:

  • The “Duration” and “CPU Time” columns are measured in milliseconds (see the aggregation example right after this list)
  • The “IntegerData” column returns the number of rows processed (only works in some events, such as ReadData)
  • The “SPID” column is the session ID of the running user. It can be used to isolate some specific execution, when there are several operations being executed at the same time on the server.
  • I did not include the “Begin” or “Current” events, only the “End” events, as it is only in this step that the “Duration” and “CPU Time” columns return data.
  • In “End” events, the “StartTime” column is when the operation started, and the “CurrentTime” column is when it ended
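Since “Duration” is in milliseconds, a grouping query makes it easy to see how long each dimension, measure group or partition took to process. A sketch, assuming the same dbo.CubeProcessing table and that the ObjectName and ObjectPath trace columns were captured:

-- Total processing time per object (Progress Report End / Process)
SELECT
    t.ObjectPath,
    t.ObjectName,
    SUM(t.Duration) / 1000.0 AS Duration_Seconds,
    SUM(t.CPUTime)  / 1000.0 AS CPU_Seconds
FROM dbo.CubeProcessing t
WHERE t.EventClass = 6        -- Progress Report End
  AND t.EventSubClass = 1     -- Process
  -- AND t.SPID = 55          -- optionally isolate one session
GROUP BY t.ObjectPath, t.ObjectName
ORDER BY Duration_Seconds DESC;
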
IMPORTANT: Enable this SQL Profiler trace only to debug specific cases. Do not leave it running unnecessarily, as it consumes extra server resources, as well as disk space to store this data.

Pay close attention so that these logs don't consume too much disk space and end up becoming a problem in the future.
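A simple retention routine scheduled in a Job helps with that. A hypothetical example that keeps only the last 30 days of rows in the dbo.CubeProcessing table:

-- Purge trace rows older than 30 days
DELETE FROM dbo.CubeProcessing
WHERE StartTime < DATEADD(DAY, -30, GETDATE());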

And that's it, folks!
I hope you liked this tip and now you can monitor your cube processing.