
Get full Time Activity data from QuickBooks into Power BI


Problem

At the time of writing, the native QuickBooks connector in Power BI has some shortcomings for the Time Activity data: It will not return employee details (so you will not know who did the hours) and it will not return hours and minutes for entries that weren’t captured via start and end time.

Solution

But fortunately the connector has 2 functions that can return the full data that the QBO API has to offer. At the end of the list in the navigation pane there are the functions “Entity” and “Report”:

QuickBooks Time Activity Navigation Pane

Choose “Entity” and a window will pop up where you enter “select * from TimeActivity” into the first field (“query”):

QuickBooks Function Dialogue

Click Invoke and you might be prompted with the sign-in dialogue. Signing in and continuing will return a table like this:

QuickBooks Time Activity Result Table with Records

Click the expand-arrows (1) -> disable prefixes (2) -> Load more (3) -> Select All Columns -> OK -> dance the happy dance:

QuickBooks Results Time Activity

Employee data in a record that you can expand as you like:

QuickBooks Time Activity Employee data

And at the end of the table you’ll find the hours and minutes for the entries that didn’t use start and end date:

QuickBooks Hours

Bonus

According to the manual, you can also pass parameters to this function call. I’ve created a handy function with an optional parameter that allows you to pass in a date after which the time entries shall be returned. But you can use the function without this parameter as well and return all time entries by just adding open and close parentheses like so: NameOfTheFunction()

This function also calculates the total duration from the 2 different capture methods.
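A minimal sketch of how such a wrapper could look, saved as a function query “fnTimeActivities” – “QboEntity” stands for the connector’s “Entity” function and the TxnDate filter field is an assumption for illustration; the duration calculation of the original function is omitted here:

    (optional StartDate as nullable date) =>
    let
        BaseQuery = "select * from TimeActivity",
        // append the date filter only if the optional parameter was provided
        Query = if StartDate = null
                then BaseQuery
                else BaseQuery & " where TxnDate > '" & Date.ToText(StartDate, "yyyy-MM-dd") & "'",
        // QboEntity stands for the "Entity" function from the navigation pane
        Result = QboEntity(Query)
    in
        Result

Calling fnTimeActivities(#date(2019, 1, 1)) would then return only the newer entries, while fnTimeActivities() returns them all.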

Enjoy & stay queryious 😉



Writing data to GitHub using Power Query only


You shouldn’t do it …

Generally it’s a very bad idea to execute commands in Power Query that write data out to a source, as these commands might be executed multiple times (https://blog.crossjoin.co.uk/2018/10/07/power-query-write-data/).

… unless … maybe ?

However, as with most good rules, there are exceptions. I leave it to you to decide whether my use case here is a valid candidate for it. It doesn’t execute the code twice, because I execute the query only from the query editor and none of the other queries references its results. But please see for yourself – writing data to GitHub using just Power Query:

The video

In the video I show how I enrich my M-functions with metadata before loading them directly with Power Query into a new Gist on GitHub. Then I trigger an automatic update of my function library (M-extension). For that I have to switch to Power BI, because it is currently not possible to run R- or Python-scripts in Excel (the script writes the .mez-file into the destination folder for me).

Trading code for votes

The code I’m sharing today is the one that exports the M-code to GitHub. I’m going to share the full solution as soon as the following features are implemented in Excel (like they currently are in Power Query for Power BI):

You can help with this by upvoting the ideas linked above. Actually, my guess is that we need around 1000 votes for these features to be considered. So please share this article with your colleagues and friends to make this happen.

The code

https://gist.github.com/ImkeF/7202ba50867377988719f2c3492931f7

You need an access token from GitHub for Power Query to pull your data from your repos and gists: https://github.com/settings/tokens
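As a minimal sketch of the core web call (assuming a classic personal access token – the gist name and content are just placeholders):

    let
        Token = "YourGitHubAccessToken",   // from https://github.com/settings/tokens
        // JSON body for the GitHub "create gist" endpoint (POST /gists)
        Body = Json.FromValue([
            description = "Exported M function",
            public = false,
            files = [ #"MyFunction.pq" = [ content = "let f = () => 1 in f" ] ]
        ]),
        // passing a Content field turns the Web.Contents call into a POST request
        Response = Json.Document(
            Web.Contents(
                "https://api.github.com",
                [
                    RelativePath = "gists",
                    Headers = [
                        #"Authorization" = "token " & Token,
                        #"User-Agent" = "PowerQuery",
                        #"Content-Type" = "application/json"
                    ],
                    Content = Body
                ]
            )
        )
    in
        Response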

Why am I so passionate about this?

In my eyes, these features hold the key to make the Power Tools in Excel really easy and efficient to use:

Thanks for your votes, enjoy & stay queryious 😉


Screenshot tutorial: Add a column with custom function code in Power Query


The following steps show how to create a new column in a table using existing custom function code. This works in Power BI as well as in Power Query in Excel:

1. Copy the function code

1: Copy the function code

2. Create new blank query in the query editor

2: Create new query

3. Open the advanced editor

3: Open the advanced editor

4. Replace existing function code by new one

4: Paste new function code

5. Rename query (optional)

5: Rename query (optional)

6. Invoke custom function

6: Add a new column with “Invoke Custom Function” command

7. Edit function parameters

7: Edit function parameters
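Put together in code, the steps boil down to something like this sketch (“fnMyFunction” and the column names are placeholders):

    // step 4: function code pasted into the blank query, renamed to "fnMyFunction"
    (input as text) as text => Text.Upper(input)

    // steps 6-7: "Invoke Custom Function" then generates a step like this
    Table.AddColumn(Source, "Result", each fnMyFunction([Column1]))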

Enjoy & stay queryious 😉


Types in PowerQuery cannot be compared and a Type.AsText function


Sometimes I need to retrieve the textual representation of a type in Power Query, and I’m using a fairly verbose function that I stole ages ago (I believe it was here: https://ssbi-blog.de/technical-topics-english/power-query-editor-using-notepad/ ):
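A condensed sketch of that approach – it simply walks through the types with Type.Is until one matches (trimmed to a few types here):

    (t as type) as text =>
        if Type.Is(t, type text) then "text"
        else if Type.Is(t, type number) then "number"
        else if Type.Is(t, type logical) then "logical"
        else if Type.Is(t, type date) then "date"
        else if Type.Is(t, type datetime) then "datetime"
        else if Type.Is(t, type duration) then "duration"
        else if Type.Is(t, type list) then "list"
        else if Type.Is(t, type record) then "record"
        else if Type.Is(t, type table) then "table"
        else if Type.Is(t, type function) then "function"
        else "any"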

Although I don’t know exactly how this function is evaluated, I suspect it checks every type from the beginning until the matching type is found. So I was tinkering with a potentially faster solution based on a merge of tables: Create a table with types:

Lookup Table for Types

and merge it on the type itself with the types in my original table:

Main Table with Types that need textual representation

let
     Source = #shared,
     Functions = Record.ToTable(Source),
     #"Added Custom" = Table.AddColumn(Functions, "Type", each Value.Type([Value])),
     #"Kept Last Rows" = Table.LastN(#"Added Custom", 1),
     Types = Record.ToTable([table = type table, function = type function]),
     #"Merged Queries" = Table.NestedJoin(#"Kept Last Rows",{"Type"},Types,{"Value"},"Types",JoinKind.LeftOuter),
     #"Expanded Types" = Table.ExpandTableColumn(#"Merged Queries", "Types", {"Name"}, {"Name.1"})
 in
     #"Expanded Types"

But this query will not return any matches. I also tried out a (potentially) slower version using Table.SelectRows(Types, each [Value] = x[Types]) – but still no match.

What I found particularly frustrating here was that in some cases, these lookups or filters on type columns worked. This, for example:

let
     Source = #shared,
     Functions = Record.ToTable(Source),
     #"Added Custom" = Table.AddColumn(Functions, "Type", each Value.Type([Value])),
     #"Kept Last Rows" = Table.LastN(#"Added Custom", 1),
     Types = Record.ToTable([table = type table, function = type function]),
     #"Merged Queries" = Table.NestedJoin(Types, {"Value"}, Types, {"Value"}, "Types", JoinKind.LeftOuter),
     #"Expanded Types" = Table.ExpandTableColumn(#"Merged Queries", "Types", {"Name"}, {"Name.1"})
 in
     #"Expanded Types"

As it turns out, M is not equipped to compare types: https://social.technet.microsoft.com/Forums/en-US/6f2f0336-e57e-477f-a40e-5ffc9f0ca7be/type-equivalence?forum=powerquery 

“Type equivalence is not defined in M. Any two type values that are compared for equality may or may not return true.”

This is exactly what I was experiencing: Some comparisons worked and some didn’t.

Although this is not an everyday use case, I hope this blogpost will prevent you (and me) from running into this trap (again).

Enjoy & stay queryious 😉


Bulk-extract Power Query M-code from multiple Excel files at once


Some time ago I published a function that extracts all M-code from Power BI (.pbix)-files. Today I publish its pendant that bulk-extracts Power Query M-code from multiple Excel files at once. The code shares many elements with the aforementioned function, so please refer to that article for reference.

How to use

The function below has just one parameter where you either fill in the full filename (incl. path) of an Excel file or a folder path where multiple files reside. The function will automatically detect the right mode and spit out the M-code.

If you want to analyse code from multiple Excel files that don’t sit in one folder, you just create a table with a column that holds the full filenames (one in each row). Then add a new column where you call the function, referencing the first column (this blogpost shows how to do it).

You don’t have to edit or unzip the files in advance – this function does all of that automagically.

The function code
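As a rough sketch of just the mode detection described above (“fnExtractMCodeFromFile” is a hypothetical stand-in for the actual unzip-and-extract logic):

    (PathOrFile as text) =>
    let
        LowerName = Text.Lower(PathOrFile),
        IsSingleFile = Text.EndsWith(LowerName, ".xlsx") or Text.EndsWith(LowerName, ".xlsm"),
        // single file: wrap it into a one-row table; folder: list all Excel files in it
        Files = if IsSingleFile
                then #table({"FilePath"}, {{PathOrFile}})
                else Table.AddColumn(
                         Table.SelectRows(Folder.Files(PathOrFile),
                             each Text.EndsWith(Text.Lower([Name]), ".xlsx")
                               or Text.EndsWith(Text.Lower([Name]), ".xlsm")),
                         "FilePath", each [Folder Path] & [Name]),
        AddCode = Table.AddColumn(Files, "MCode", each fnExtractMCodeFromFile([FilePath]))
    in
        AddCode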

This function is particularly useful if you want to compare code or audit your Excel-files.

Enjoy and stay queryious 😉


Part 2: Automatically validate E-mail attachments with Flow and Power BI


In Part 1 of this little series I described the core Flow on how to automatically validate e-mail attachments with Flow and Power BI. It automatically sends an e-mail back to a business partner whose attachment didn’t meet the agreed specifications:

Automatically validate e-mail attachments – Part1

But before going live with this Flow, you should consider the following aspects:

Caveats

Refresh limitations in Power BI Service

If you run on a Power BI Pro license, the refresh can only be triggered 8 times a day (48 times with Premium). So it might not be a good idea to use this Flow for multiple senders or events.

The “When a data driven alert is triggered”-step will be ignored

Yes: That step returns true if the last refresh of the dataset has triggered an alert and false if not. BUT: You need an action to check that value and determine what shall be done in each case. A condition will do that:

Condition to check the result of the data driven alert

The data driven alert might show the status from the previous refresh

After the refresh of the dataset has been triggered, the data refresh might take a while. But the check for the data driven alert will start immediately after the previous step, so it will most likely run before the dataset has actually been refreshed. So I need a step that checks if the refresh has finished. For that, I can call the Power BI API again, using the endpoint that returns the refresh history of the dataset. But if this tells me that the refresh is not finished yet, I want to wait some minutes and ask again … until the status says “Completed”.

Therefore I’m using a “Do until”-control:

Wait for the dataset to be refreshed

The actions in this module will be executed until the condition (status of the last refresh equals “Completed”) is matched. To fetch the value for the status from the output of the API call, I have to navigate to it according to the structure of the returned JSON:

Refresh History Output

In Flow, square brackets are used to select fields from a record as well as positions within an array. So these commands select the Body from the output, then grab the value field from the outer record, select the first (and only) item from the list and then select the status field from the inner record:

actionOutputs('Returns_the_refresh_history_of_the_specified_dataset_from_specified_workspace')['body']['value'][0]['status']

(So Flow uses zero-based indexing as well.)

I don’t want to remove the checked files from my folder manually

So I have to add some actions that move the file(s) for me:

Move files from Transit-folder to destination-folder

So first I have to check which files are in the folder (“List files in folder”), and for each item found (“value”), I move it to my destination folder. (If there is a risk that you receive 2 or more files with the same name in one email, you might want to replace the “Received Time” by the actual time.)

Now the Flow is ready to flow and I hope you enjoyed this sample of how Flow and Power BI together can automate your business processes.

The full Flow in all its glory

Want more?

Or are you even in the Flow-fever already and think: “Well, I’d actually like to attach a table with all faulty rows back to the sender, so that he can check more easily what went wrong.”? Then make sure to tune in for the next article, where I will cover exactly that.

Enjoy and stay queryious 😉


Export data from Power BI using Microsoft Flow


In my last 2 posts I’ve described a way to automatically validate attachments from incoming E-mails. Microsoft Flow would watch for incoming E-mails that match certain criteria and move their attachments to a dedicated folder. Then it would trigger a refresh of a Power BI dataset that has been designed to check for errors in those attachments. Data driven alerts in Power BI would indicate that there are certain errors and trigger a Flow that would send an E-mail back to the sender, informing him that his attachments didn’t meet the agreed criteria.

In this article I will now explain how not just a trigger about the existence of a faulty attachment can be passed back to Flow, but also the corresponding data itself. For that, I write a query that exports data from Power BI to Flow. But watch out: This is not suitable for very big tables – I experienced timeouts with tables of 300k rows already.

Send data from Power BI to Microsoft Flow

As Chris Webb described in this article, Power Query can create POST requests to a webservice, thereby passing values in the body of the call to the webservice. This allows exporting the data from Power BI. With Flow, it is very easy to set up a webservice: Just create a Flow whose trigger is “When a HTTP request is received” (no further inputs in that step) and add a “Compose”-action with the Body-element in it. Then save the Flow and copy the generated URL:

Setup webservice in Flow that listens for incoming data (export from Power BI)

Power BI

In Power BI, you perform the following steps:

  1. Transform your error-table into a list of records (as that is the format of a JSON-array that Flow likes):
    • row 3: Table.ToRecords(<YourTableName>)
  2. Transform that into a JSON-binary that the Web.Contents-function can digest:
    • row 4: Json.FromValue(<YourListOfRecordsFromAbove>)
  3. Make the Web-call to the copied URL (use Anonymous credentials):
    • row 5-7: Web.Contents(<PasteTheURLFromFlowHere>, [Content=<JsonFromAbove>])
  4. Parse the result and load this query to the data model (this is very important, because otherwise the web call wouldn’t be made when the dataset is refreshed!):
    • row 8: Lines.FromBinary(<WebCallFromAbove>)

Code to export a table from Power BI to Flow
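Assembled, the query from those rows looks roughly like this sketch (“MyErrorTable” and the Flow URL are placeholders):

    let
        ListOfRecords = Table.ToRecords(MyErrorTable),                    // row 3
        JsonBody = Json.FromValue(ListOfRecords),                         // row 4
        WebCall = Web.Contents(                                           // rows 5-7
                      "https://prod-xx.westeurope.logic.azure.com/workflows/YourFlowUrl",
                      [Content = JsonBody]),
        Result = Lines.FromBinary(WebCall)                                // row 8
    in
        Result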

Back in Flow

Modify the “Compose”-step to parse out the table from the JSON like this:

Next step is to create a csv table:

And export it to OneDrive:

Save csv to OneDrive for Business

I’ve passed the FolderPath and FileName dynamically from Power BI here, but that’s not necessary. Just make sure that the folder is different from the one that contains the original attachments: A trigger will be set on this folder that sends the e-mail back to the sender, so saving the original attachment mustn’t already trigger this return e-mail.

Now this sub-flow is completed. In the next steps you will adjust the main flow (as described in the previous post) to attach the generated file to the email.

Adjust main Flow

The task is to attach the stored csv-file to the email that will be sent out to the sender of the attachment. There are a couple of new steps (red) and an adjustment in the “Send email”-step:

Adjustments for the return-attachment

As the query that calls the webservice will probably be executed twice, I’ve added a wait-step. The time has to fit the table length, so you might test this for your specific file behaviour. Then the content from the saved file has to be fetched and added to the attachment:

Adjustments in main Flow: Attach e-mail

The last 2 new steps in the flow are similar to the previous ones and simply delete the files from the “return-attachment”-folder. Of course, if you want to keep them in a folder somewhere, you can instead implement a step that moves the files over there instead of deleting them.

Outlook

As soon as Power Query is fully integrated in Flow, these tasks won’t need any Power BI involvement any more: You can do the validations directly in the editor there. Although: Given the current lack of support for parsing csv-files in Flow (and the terrible performance of the current workaround), I’m wondering if it wouldn’t be better if the Power Query connector in Flow saved its results to csv-files instead of pushing them back to Flow (a bit like the new dataflows are doing it now). This would allow for mass data transformations and forwarding their results without congesting the Flow-service.

What are your thoughts on this?

Enjoy & stay queryious 😉


How to get more out of your Graph API custom connector in Power BI


The Graph API can deliver a huge amount of interesting data from your Microsoft 365-universe, but the Graph API custom connector for Power BI is not able to retrieve everything from it in its current shape. So I’ve modified it a bit to squeeze out a bit more of its sweet juice.

Problem

When trying to get the details for planner tasks, the following error-message appears:

Error in Graph API custom connector when retrieving details from planner tasks

Solution

Add another function to the connector that uses the Web.Contents-function instead of the OData.Feed-function.

New function for the custom connector to use Web.Contents instead

Make sure to include the metadata record from row 48, as without it the Data Source Kind definition record, which provides the authentication details, would be ignored.

To make this work, you have to create another DataSource.Kind-record, as you can only use the same record for functions that have the same required function parameters. And this is not the case here, as the new function requires a URL, while the first function doesn’t require any arguments at all.

Also you need a second Publish-record if you want this function to appear as a separate connector in the UI:

Separate Publish-To-UI-record
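For orientation, here is a sketch of how those pieces fit together in the connector’s section document – the names and the Azure AD details are illustrative, not the original connector code:

    // the metadata record (the "row 48" counterpart) ties the function
    // to the Data Source Kind and Publish definitions below
    [DataSource.Kind = "MyGraph2", Publish = "MyGraph2.Publish"]
    shared MyGraph2.Contents = (url as text) =>
        Json.Document(Web.Contents(url));

    // second Data Source Kind record, providing the authentication details
    MyGraph2 = [
        Authentication = [
            Aad = [
                AuthorizationUri = "https://login.microsoftonline.com/common/oauth2/authorize",
                Resource = "https://graph.microsoft.com"
            ]
        ]
    ];

    // separate Publish record, so the function appears as its own connector in the UI
    MyGraph2.Publish = [
        Beta = true,
        Category = "Online Services",
        ButtonText = { "MyGraph2", "Graph via Web.Contents" }
    ];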

 

This feels odd, doesn’t it?

If you haven’t done much with custom connectors yet and are halfway fluent in the M-language (but in no other programming languages), you might wonder how on earth that record from row 48 delivers its content to the following function. Also, you might ask yourself how/why the content of the MyGraph2-DataSource.Kind-record can be retrieved by putting the variable name into quotes, instead of referencing it like you would in Power Query: DataSource.Kind = MyGraph2.

But if you haven’t, maybe you have instead fiddled around with the function’s fancy variable name that contains a “.” and changed its first part to something nicer.

All this might trigger the moment where you realize that some other language must be involved here. And exploring the files in the .mez-folder gives a hint on what that could be:

Could some C# be involved here ?

I haven’t found any documentation about this yet, but if anyone has, please send me the link and I will include it here.

Edit 17th April 20:15 UTC: There is no other language involved. It’s all M. Will update once I can explain how this works.

How to use

You have to provide a valid URL as the only parameter to this new function, and it can easily be retrieved from the Graph Explorer:

Create URL by choosing the desired function

This will return this sample-URL: https://graph.microsoft.com/v1.0/planner/tasks/{task-id}/details

Just replace the placeholder “{task-id}” with the id of the task to query. (It’s easiest to start with the standard OData-function and then use the Web.Contents-function only where needed. In the sample above, you’d add a column where you call the new function and reference the column with the task-id as the URL-input-parameter.)
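In M, that added column could look like this sketch (assuming a tasks table with an id-column and the MyGraph2.Contents-function from the sketch above):

    Table.AddColumn(PlannerTasks, "Details",
        each MyGraph2.Contents("https://graph.microsoft.com/v1.0/planner/tasks/" & [id] & "/details"))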

Enjoy & stay queryious 😉



Query folding on JoinKind.Inner gotcha for Power BI and Power Query


If you query databases that support query folding, you’re probably very aware of every step you take, and you check whether folding happens with every new step, like so:

“View Native Query” shows the folded query that’s sent back to the server (if not greyed out)

Folding will (usually) happen as long as “View Native Query” isn’t greyed out.

So when doing an inner join on tables within the same database, I was a bit surprised to see this actually greyed out, as according to the literature it should fold.

But guess what? After I expanded a column from it, the folding was back again:

Expanding column(s) brings back folding

Also, I vaguely remember having seen queries fold where “View Native Query” was actually greyed out. So I would always recommend checking the actual query in SQL Server Profiler before shouting at your screen.

Edit:

You should use this technique as well when you don’t actually need any of the expanded columns, but just use the inner join to filter out records from your “left” table. Thanks to Ed Hansberry for pointing this out:
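A sketch of the pattern (server, database and table names are illustrative): inner join, then expand at least one column from the joined table – even if you only need the join as a filter:

    let
        Source = Sql.Database("MyServer", "MyDb"),
        Left = Source{[Schema = "dbo", Item = "FactSales"]}[Data],
        Right = Source{[Schema = "dbo", Item = "DimProduct"]}[Data],
        Joined = Table.NestedJoin(Left, {"ProductKey"}, Right, {"ProductKey"},
                     "DimProduct", JoinKind.Inner),
        // this expand step is what brought "View Native Query" back in my tests
        Expanded = Table.ExpandTableColumn(Joined, "DimProduct",
                       {"ProductKey"}, {"DimProduct.ProductKey"})
    in
        Expanded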

Enjoy and stay queryious 😉


Easy way to retrieve Teams data in Power BI via Flow (and other data from the Graph)


In a previous post I described how to use a custom connector to retrieve data from the Microsoft Graph API. But this requires registering an app, and adjusting the M-code in the connector itself requires some M-knowledge. So I thought it might be a good idea to share an alternative method to retrieve data from Teams, for example, that works out of the box. A couple of endpoints are currently supported:

Also, these triggers are available:

I’m using the “Get messages”-action to retrieve all messages from a Teams channel.

High level overview

Fetch the data with a standard-connector from Flow and save it to a JSON-file on your drive. From there you import it via a standard JSON-connector into Power BI.

In Flow

I started my Flow with a schedule trigger that fires daily at midnight. Then I get the messages from Teams by selecting the Team- and Channel-name. You will have access to all the teams that you’re part of.

Please note that this connector is in preview at the time of writing, so it could still change. What I hope is that the connector will be extended to include filters (for dates, for example) as well, so one doesn’t have to pull all conversations with each call.

That step returns a JSON that I want to save into a file as it is, as it’s so much easier to extract the data from a nested JSON like this in Power Query than in Flow. You can either add a compose step or transfer the returned body directly into the “Create file”-step with a little trick.

1 Compose

2 Directly

In the field “File content” you choose the “Expression”-tab from the dynamic expressions. There you just type a space –> click on the tab “Dynamic content” –> choose “Body”. That will create the following formula, which you simply accept: body('Get_messages'). (Of course, you could also quickly type that in manually if you want to avoid this slightly fiddly clicking.)

Back in Power BI, I import the file as JSON, simply expand all the columns I need and filter as desired.
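A sketch of that import side (the path and field names are placeholders – check the shape of your own JSON):

    let
        Source = Json.Document(File.Contents("C:\Flow\TeamsMessages.json")),
        // assuming the saved body contains the usual "value"-array of messages
        Messages = Table.FromList(Source[value], Splitter.SplitByNothing(), {"Message"}),
        Expanded = Table.ExpandRecordColumn(Messages, "Message",
                       {"id", "createdDateTime", "from", "body"})
    in
        Expanded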

Enjoy & stay queryious 😉


How to cancel your Power Query refreshes fast in Power BI and Excel


If you’re working with large data or complex queries that take a long time to refresh, cancelling one of those refreshes can take even longer – especially if the query has been running for quite some time already.

Luckily, there is an easy trick to cancel a refresh without losing the work you’ve done already:

Step-by-step-instruction

  1. Open the task manager (Ctrl + Alt + Del -> select Task Manager, or right-click the taskbar)
  2. Select (one of) the “Microsoft Mashup Evaluation Container”-processes with high CPU usage (below the main process!)

    Cancel refresh by ending a Container-task

  3. Right-click it and choose “End task”.
  4. A pop-up message like this will appear – just click Cancel (and don’t click “Report this issue”!!):

    Cancel refresh, but don’t report an issue

  5. Another scary message might appear as well – just click Close:
  6. There might also be a message that other refreshes are currently running, asking if you want to cancel them as well – choose Cancel.

That’s all: All the changes you’ve made in the query editor and forgot to save before refreshing will be kept, and you can continue your work from here.

Enjoy & stay queryious 😉


A new Table.ContainsAnywhere function for Power Query in Power BI and Excel


The native Table.Contains-function in Power Query tells you if one or more strings are included in one or more of its columns. But you have to be specific about which strings you search for in which column. So what to do if you want to search for a string in all of its columns instead? Use my new Table.ContainsAnywhere function.

Problem

In the native function, you have to pass in a record with search term and column name. So if you search for “blue” in column “Description”, your formula would look like so:

Table.Contains( YourTableName, [Description = "blue"] )

But that’s not what I want in this case. I want the formula to search through all columns within the table for the occurrence of “blue”.

Solution

One way would be to transform the list of the table’s column names into a nested list where the search string is added for each column name. But that gets a bit clumsy if you want to use it in a Table.AddColumn-step. So I’m going down a different path instead:

Say this is my table and I want to know if the string “INCOME STATEMENT” is included in any of its fields:

Table to search all columns for a specific string

 

1. Split the table into a list of lists where each list contains all fields from one row:

Table.ToRows(Source)

Table.ToRows creates one list per row in a nested list

2. Combine that nested list into one, which means you have all fields of the table in one big list:

List.Combine(Table.ToRows(Source))

Combine list of nested list into one (expanded) list

3. Check if this list contains the search term:

List.Contains(List.Combine(Table.ToRows(Source)), "INCOME STATEMENT")
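Saved as a function query (e.g. “Table.ContainsAnywhere”), this is all it takes – plus an example of using it in an added column over nested tables ([Data] is a placeholder for the column holding the tables):

    (MyTable as table, SearchString as text) as logical =>
        List.Contains(List.Combine(Table.ToRows(MyTable)), SearchString)

    // e.g. flag every nested table that contains the string anywhere:
    Table.AddColumn(Source, "ContainsIncomeStatement",
        each Table.ContainsAnywhere([Data], "INCOME STATEMENT"))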

Use cases

I’m using it to catch some specific tables from SEC-filings for example. This is the result of a Pdf.Tables-function to extract quarterly report data from a large pdf file. It shows all the different page- and table elements in it:

Table.ContainsAnywhere function in action

I’ve added a column with the function above and can now filter on “true” to extract the matching tables.

Variations

You want a case-insensitive match? Or to search for multiple strings? And be able to distinguish between any- and all-matches? Or even go for partial matches?

Then watch out for the next article where you get the function with all bells and whistles.

Enjoy & stay queryious 😉


Bug in Power BI R Scripts “package … was installed with different internals”


Today I spent many hours hunting an R-script error in Power BI, and before Steph Locke came up with a solution for it, I came across a couple of posts and heard of other people having the same problem. This blogpost is to make distribution of the solution a bit easier and to hopefully help other folks with the same problem in the future.

The Problem

When running R-scripts in Power BI, I got all sorts of error messages that all had one thing in common: They were complaining about one or more packages being installed by an R version with different internals.

They ran without any problem in RStudio or on other machines, just not on my own specific laptop.

The Solution

I have no idea what causes the problem, but Steph Locke showed me how she solved it before: She installed the problematic packages into R’s program folder and pointed to this folder in a parameter when loading the packages.

Step-by-step-instruction

1) Find your paths

Display your R-paths by using this function: .libPaths()

The first path is the one that Power BI will most likely reference by default for the package information, and it is the one that RStudio installs packages to. The second path belongs to a folder in the program folder that also contains the R-program itself.

Now, to solve the problem, you have to install the packages that turn up in the error messages into this second library folder.

2) Install package into the program folder

For that, you have to open the RGui with admin rights. If you don’t have a symbol for it on your desktop, you’ll find the file in the bin-folder:

install.packages("scales",lib="C:\\Program Files\\Microsoft\\R Open\\R-3.5.2\\library")

The first function parameter takes the name of the problematic package, and in the second parameter you have to pass in the path to the library folder within R’s program folder. That’s the 2nd folder from the step above. Make sure to flip the slashes. The double backslashes might not be necessary for everyone, but for me it wouldn’t work otherwise.

3) Adjust R-script

In this last step, you have to add one or more lines of code on top of your existing code:

library("YourPackageName", lib = .libPaths()[-1] )

This formula will load the package, and the 2nd parameter determines the path from which the package will be taken: The first item of the paths from step 1 is skipped, so the library in the program folder is chosen instead.

Just install one package, run the script again, see if another package pops up – rinse – repeat – until you’re done 😉

Please vote for the bugfix here: https://community.powerbi.com/t5/Issues/R-visual-error-quot-package-was-installed-by-an-R-version-with/idi-p/512759

& stay queryious 😉


The full Table.ContainsAnywhere function for Power Query in Power BI and Excel


In a previous post I introduced the concept of a function that searches for an occurrence of a character or string within all columns of a table. Here I share the full “Table.ContainsAnywhere”-function with parameters for many useful options.

Function parameters and options

  1.  The first parameter “MyTable” refers to the table to search through.
  2.  The 2nd parameter “MySearchStrings” can be either a text value or a list of strings to be searched for. The function will take care of either case automatically.
  3.  If the 2nd parameter is a list and this 3rd parameter is null or not specified, the function will return true if any of the list items is found within the table. But if it is set to “All”, all list items have to be found somewhere in the table for the function to return true.
  4.  By default, the search will be made in a case-sensitive mode (as this is the default mode in Power Query). But any entry into the 4th function parameter will turn this into a case-insensitive mode instead.
  5.  By default, the string or list entry has to match an entry in the table fully. Again, any entry in the 5th parameter swaps that to a partial match.

Function code

I encourage friends of the M-language to read through the documented code of the “Table.ContainsAnywhere”-function. It shows a fairly compact way to handle the 24 different function variations that are needed for all possible parameter combinations. For each parameter, I created one function module that covers the part of the function logic that is specific to this parameter. These function modules also carry the case selection already, so they deliver just what’s needed to the main query part (2), where they can then be executed sequentially. This way I avoid heavy branching with if-then-else-statements and redundant code.
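As a much simplified sketch of the idea – without the modular structure described above – the parameter logic could be condensed like this:

    (MyTable as table, MySearchStrings as any, optional MatchAll as nullable text,
     optional CaseInsensitive as nullable text, optional PartialMatch as nullable text) =>
    let
        // parameter 2: accept a single text value as well as a list
        SearchList = if Value.Is(MySearchStrings, type list) then MySearchStrings else {MySearchStrings},
        // parameter 4: optionally normalize everything to lower case
        Normalize = (t) =>
            let txt = try Text.From(t) otherwise null
            in  if txt = null then null
                else if CaseInsensitive <> null then Text.Lower(txt)
                else txt,
        Fields = List.Transform(List.Combine(Table.ToRows(MyTable)), Normalize),
        // parameter 5: full match per default, partial match on request
        MatchOne = (s as text) =>
            if PartialMatch <> null
            then List.AnyTrue(List.Transform(Fields, each _ <> null and Text.Contains(_, Normalize(s))))
            else List.Contains(Fields, Normalize(s)),
        // parameter 3: any-match per default, all-match on request
        Result = if MatchAll <> null
                 then List.AllTrue(List.Transform(SearchList, MatchOne))
                 else List.AnyTrue(List.Transform(SearchList, MatchOne))
    in
        Result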

Enjoy and stay queryious 😉


DAX CALCULATE Debugger


CALCULATE is the most powerful function in DAX, as it allows you to change the filter context under which its expression is evaluated to your heart’s content. But with a big number of options to choose from often comes big frustration when the results don’t match expectations. Often this is because your syntax to modify the filter context doesn’t do what you intended. Unfortunately CALCULATE only displays its result and not how it achieved it, so debugging becomes a challenge. This is where my CALCULATE Debugger measure can help out:

DAX CALCULATE Debugger

This is a measure that returns a text value, showing the number of rows of the adjusted filter context table, the MIN and MAX values of the selected column, as well as up to the first 10 values. Just place this measure beneath the CALCULATE-measure in question and try to find the error 😉

Just keep in mind that this only works for standalone CALCULATE-functions and not for those that are nested in other functions (which modify the filter context).

The YTD-measure is defined as follows:

YTD = CALCULATE ( [Amount], DATESYTD ( 'DimDate'[Datum] ) )

The code for the DAX Debugger measure looks like this:

In row 2 you fill in the filter expression from the YTD-measure (its 2nd argument: DATESYTD ( 'DimDate'[Datum] )). You can choose from which column the values shall be shown – just write that in rows 6, 7 and 11 ([Datum]). If you want to adjust the TOPN-figure for the sample values to be shown, replace the 10 in row 9 accordingly. If you don’t want to show sample values at all, just uncomment row 13 and comment out rows 14 and 15.

Thanks to Tom Martens for providing the crucial hint of how to reference a column from a table that’s defined in a variable by using X-functions!

Further adjustments have to be made, if your filter expression uses the syntax sugar of boolean expressions like so:

CALCULATE ( [Amount], Product[Color] = "Red" )

This expression only returns a table when used as a filter argument in CALCULATE but not standalone in a DAX variable. So you’d have to translate the filter expression to the native underlying code like so:

FILTER ( ALL ( Product[Color] ), Product[Color] = "Red" )

As this cries for some automation, I’ve produced a nifty M-function that does all that autoMagically. It lives in my M-function-library, so I have it at hand within Power BI for immediate use.

The M-function

This function creates the DAX code in the query editor. Just fill in the parameters (see below) and the DAX code will be created automatically: Just copy and paste it as a new measure.

How to fill the parameters:

  1. filterExpression: DAX code of the CALCULATE filter expression
  2. myColumnName: Name of the column whose values to show
  3. MaxFilter: This is an optional parameter: Fill in a number other than 10 if you want to change the default value for the TOPN selection of the sample values to be shown.

This function detects boolean expressions automatically and produces the appropriate code.
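A much simplified sketch of the idea – the boolean-expression detection below is a crude heuristic and the generated measure layout is illustrative, not the original code:

    (filterExpression as text, myColumnName as text, optional MaxFilter as nullable number) =>
    let
        TopN = Text.From(if MaxFilter = null then 10 else MaxFilter),
        // crude heuristic: treat the "Table[Column] = value" syntax sugar as boolean
        // and wrap it in FILTER ( ALL ( ... ) ), like CALCULATE does internally
        IsBoolean = Text.Contains(filterExpression, "=")
                    and not Text.StartsWith(Text.Upper(Text.Trim(filterExpression)), "FILTER"),
        TableExpression =
            if IsBoolean
            then "FILTER ( ALL ( " & Text.Trim(Text.BeforeDelimiter(filterExpression, "=")) & " ), "
                 & filterExpression & " )"
            else filterExpression,
        DaxCode =
            "CALCULATE Debugger =#(lf)" &
            "VAR DebugTable = " & TableExpression & "#(lf)" &
            "RETURN#(lf)" &
            """Rows: "" & COUNTROWS ( DebugTable ) &#(lf)" &
            """ | Min: "" & MINX ( DebugTable, " & myColumnName & " ) &#(lf)" &
            """ | Max: "" & MAXX ( DebugTable, " & myColumnName & " ) &#(lf)" &
            """ | Top " & TopN & ": "" & CONCATENATEX ( TOPN ( " & TopN & ", DebugTable ), "
                & myColumnName & ", "", "" )"
    in
        DaxCode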

If you don’t know how to use M-function-code, please check out Ruth Pozuelo’s video.

Enjoy & stay queryious 😉



Debug DAX variables in Power BI and Power Pivot


When you’re dealing with a beast like DAX, you can use any help there is, right? So here I show you how you can debug DAX variables that contain tables, or show the result of multiple variables at once – so you can easily compare them with each other to spot the reason for problems fast.

Please note that currently only comma-separated DAX code is supported.

Example

Take this measure from Gerhard Brueckl’s brilliant solution for dynamic TopN clustering with others. It contains 5 variables that return tables and one variable with a scalar:

Measure with variables who contain tables and scalars

If you want to follow along with how this calculation evolves for each value in a matrix, my VarDebugMeasure will show details of every variable like so:

Measure to debug DAX variables

Method

This method is a variation of my previous blogpost, which made the elements of tables in a CALCULATE function visible. The cool thing about the new function for variables is that you can see details of all your variables in one measure (see picture above).

Code to debug DAX variables

Code parameters

Because one cannot paste text with linebreaks into the function dialogue in the query editor, I opted for a separate query that holds the code of the measure with the variables to investigate. If you name it “DAXVariableMeasureCode” like the default in the function, all you have to do is to fill in the first parameter of the function like so:

  1. A comma-separated text string that lists one value for each variable: If the variable represents a table, the name of the column whose values shall be shown; for scalars, enter null.
  2. Optional: Number of sample values to be shown. The default value is 10, so if you want to change it, just put in a different number here.
  3. Full code of the measure that includes the variables (including the measure name)

7 steps to wow

  1. Copy the function code:
  2. Create a new query in the query editor and replace the existing code with the copied code (Ctrl+A, Ctrl+V)
  3. Create another new query (named “DAXVariableMeasureCode”) where you paste the DAX-code of the measure that contains the variables.
  4. Call the function with the parameters described under “Code parameters”. In my example, this looks like so:

    Call function

  5. Copy the resulting DAX code

    Copy DAX function code

  6. Create new measure where you paste the copied DAX code
  7. Drag that measure into a table or matrix beside the original measures

Please follow along these steps in this video:

You can download the file with examples here:  Debug DAX Variables

Enjoy & stay queryious 😉


Power BI administration made easy with Power BI REST API custom connector


Today I read the (as always) great article by Matthew Roche, “Governing Power BI just got a little easier”, and couldn’t find a description of how to get to this promising window with all the admin goodness. So here is how to build your Power BI REST API custom connector then 😉 :

Create a custom connector for the Power BI REST API

Miguel Escobar has done a fantastic job of making it super easy for you here.

Edit 30th September 2019: This repo has just been updated and now includes a version with an API secret. So if you’ve downloaded that content before and got an authorization error, please get the new files.

Get data from your connector

After you’ve stored the .mez-file in the correct folder (C:\Users\<YourUserNameGoesHere>\Documents\Power BI Desktop\Custom Connectors) and opened Power BI Desktop again, you will be greeted with this warning message:

Warning for custom connectors

This tells you that bad things could happen if you import connectors from untrustworthy sources. You can read more about it here. It’s up to you to decide if you take that risk. But if you click OK, you can go on and change the security settings as described in the article.

In the Get Data search-field, type in “Power” and select “Power BI API (Beta)”

Select the Power BI REST API custom connector

You might be prompted to sign in with your Power BI user credentials.

Then you’ll see the Navigator pane, where you select “fxGETData” in the Functions-folder:

Choose function for Power BI REST API custom connector

Don’t get scared by the error-message, but copy the following string into the urlPath-field:

admin/Groups?$top=50&$expand=users,reports,dashboards,datasets

This will return the top 50 groups from your environment with users, reports, dashboards and datasets nicely connected already. Just adjust the number as needed.

Click Apply and you should see something like this:

Make sure to click “Transform Data” and not the yellow “Load”, as there’s still some work to do on the data.

Next click on the List in the value-field:

Transform the list into a table:

Transform to table

Accept the default-settings:

Accept default settings

and click on the expand-arrows of the column:

All the goodness at your fingertips

So here are 4 of the available entities directly connected to the top level data, no need to tie them all together:

Expand to your heart’s content
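Pieced together, the resulting query looks roughly like this sketch (assuming fxGETData returns the parsed JSON of the API response):

    let
        Source = fxGETData("admin/Groups?$top=50&$expand=users,reports,dashboards,datasets"),
        Groups = Table.FromList(Source[value], Splitter.SplitByNothing(), {"Group"}),
        Expanded = Table.ExpandRecordColumn(Groups, "Group",
                       {"id", "name", "users", "reports", "dashboards", "datasets"})
    in
        Expanded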

Further material

Check out the API documentation and see which other entities you can include and how to use further URI-parameters like $filter and $skip, for example.

For the refresh in the service, you have to include the connector in your gateway: https://docs.microsoft.com/en-us/power-bi/service-gateway-custom-connectors .

Edit September 25th 2019: As it turns out, dragging all the tables above into one table might create some duplicates/cartesian products. So you might narrow that down a bit as needed/feasible. But I hope that getting the connector up and running will now help you retrieve the data that you actually need.

Enjoy and stay queryious 😉


Parent-Child Hierarchies with multiple parents in Power BI with Power Query


I’ve written about a method to dynamically flatten parent-child hierarchies, also with multiple parents, a while ago here. I’ve actually used this approach for bill-of-materials cases and refined it in a series starting here. There, the quantities are aggregated in M already, as they are not supposed to change. But if one wants to use the hierarchical structure to report on transaction tables where several filters shall be applied, one has to adjust the data model a bit:

DimNodes

If you have a parent-child hierarchy with multiple parents, my function will return a table like below, where the children with multiple parents still reside in different rows:

Due to this, the table cannot directly be connected with the FactTable, as NodeKey is not unique. The solution is to create a DimNode-table that contains only the unique values from the NodeKeys. Use it as a bridge between the 2 tables and implement a bidirectional filter to the Nodes-table:
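In M, that bridge table is a one-liner (a sketch, assuming the function output is loaded as “Nodes” with the key column “NodeKey”):

    let
        Source = Nodes,
        DimNode = Table.Distinct(Table.SelectColumns(Source, {"NodeKey"}))
    in
        DimNode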

Best to hide that table from view, because the fields for the matrix visual have to come from the original tables. Now look at this:

Doesn’t that look like the perfect ragged parent-child-hierarchy-matrix?

Of course, the totals won’t add up, but that’s by design if you allocate items to multiple parents.

All measures are taken from the Russo/Ferrari parent-child pattern. My M-function produces the same structure as the table created by the DAX-functions in that article – just that it does it dynamically, so you don’t have to add new levels manually, and it copes with multiple parents. Sure, the dynamic aspect is useless if you want to create a traditional hierarchy like described in this post, but for bill-of-materials solutions, where you aggregate the values in Power Query already, it’s pretty useful actually.

File to download:  Parent Child Hierarchy Multiple Parents

Enjoy and stay queryious 😉

Note: The code of the M-function is fairly old and quite embarrassing, but as it works, an update doesn’t get a high priority currently 😉


Performance tip for aggregations after joins in Power Query and Power BI


This article was edited on 2nd Nov 2019. JoinKind.Local has been removed to avoid problems with merges on tables with primary keys:

In this article you’ll learn how to speed up the aggregation of joined/merged tables by orders of magnitude (I recorded up to 30 times faster execution times). This method works for merges where both tables have multiple rows for each key. If one of your tables has a primary key, the method Chris Webb describes here works just as well: Chris Webb’s article on how to improve performance on aggregations after joins using primary keys.

You can follow along the different methods in this file:  PerformanceAggregationsAfterMerges1_Upload.zip

Background

When you join a table to another table in Power Query, the UI gives you the option to either expand the columns (default) or aggregate the contents of the joined tables. That’s useful if multiple rows are returned for each row of the table that has been joined to (the left table):

Performance of native aggregation after join is very slow

But this method is extremely slow. Compared to “simply” expanding all values to new rows (which took around 5 seconds), the aggregation took around 50 seconds. The automatically generated code uses the “Table.AggregateTableColumn”-function. (see Query1_NativeAggregate)

Table.AggregateTableColumn(#"Merged Queries", "Expanded Custom", {{"Animal", each Text.Combine(_, ", "), "CombinedValues"}})

My first attempt to speed up performance was to not expand the column that contains the merged table at all, but to add a column with a manual aggregation function instead (see Query2_AddManualAggregation):

 Table.AddColumn(#"Merged Queries", "CombinedValues", each Text.Combine([Expanded Custom][Animal], ", "))

This improved speed by 50-60%, but it was still way slower than expanding all rows.

The unexpected solution

What turned out to be by far the fastest was to expand the columns to all the new rows and then “group back” (see Query3_ReGroupIntegrated):

Table.Group(#"Expanded Expanded Custom", {"Column1", "Custom"}, {{"CombinedValues", each Text.Combine(_[Animal], ". ") }})

To my surprise, this was even faster than skipping this step (around 2 seconds instead of 5). Means: This aggregation shortened the table from 676k rows to 26k rows, and of course, loading a shorter table should take less time. But I expected the computation of this aggregation to also take a fair amount of time. In the end, that was actually less than the time gained by the shortening of the table.

But the surprise didn’t stop there. As I know many beginners aren’t so comfortable with editing existing code, I tried a different method (see Query4_ReGroupAddColumn): I kept the native “All rows”-operation and added the same column as in Query2_AddManualAggregation. And it was just as fast as, or even slightly faster than, the fast Query3_ReGroupIntegrated.
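A sketch of that variant – keep the native “All rows”-aggregation and aggregate from the nested table in an added column (step and column names follow the sample file):

    Grouped = Table.Group(#"Expanded Expanded Custom", {"Column1", "Custom"},
                  {{"AllRows", each _, type table}}),
    Added = Table.AddColumn(Grouped, "CombinedValues",
                each Text.Combine([AllRows][Animal], ", "))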

So just by adding 2 steps – expansion of the merged column and re-grouping – I sped up the query significantly. Another mystery that the M-engine holds for me …

Please share your thoughts about and experiences with this method in the comments below. Maybe MS will change the code behind the default Aggregate-function if there is enough evidence that the alternative method proves to be superior and stable in many other cases as well.

Thanks and stay queryious 😉


Dynamically create types from text with Type.FromText in Power Query and Power BI


In this article I’ll show you how to create types from text in Power Query, enabling you to dynamically change types via functions, for example. It comes as a custom Type.FromText function, which has been asked for in the comments of this article: https://www.thebiccountant.com/2017/01/09/dynamic-bulk-type-transformation-in-power-query-power-bi-and-m.

Problem

Transforming a column to type text can be done like so:

Table.TransformColumnTypes(Source,{{"Column1", type text}})

This transforms “Column1” from table “Source” to type text. Now say you want to make the type dynamic and move it to a function parameter like so:

(VariableType as type) =>

Table.TransformColumnTypes(Source,{{"Column1", VariableType}})

This returns a function dialogue as follows:

Type in “type text” like so:

You’ll receive the following error-message:

Aim is to create a function that allows this syntax ( Type.FromText )

Look at the M-code that has been generated in the formula bar: “type text” is in quotes, and this makes it a text string. The function dialogue doesn’t give an option to actually select or enter a type expression, which would be without quotes, like so:

MyFunction( type text )

So if I aim to feed my function a text value to dynamically create a type from it, I need a function that returns a type and accepts a text value to identify the actual type.

Solution

I couldn’t find a native function for it, so Expression.Evaluate comes to the rescue here:

Table.TransformColumnTypes(Source,{{"Column1", Expression.Evaluate("type text", [type text = type text])}})

This allows me to use a text expression as the type selector. But hey: What’s with the record in the second function argument? Now we have some type expressions in there, so nothing is really gained …

(Edit: If you wonder why I haven’t used #shared as a dynamic version for the record, please read this article: https://www.thebiccountant.com/2018/05/17/automatically-create-function-record-for-expression-evaluate-in-power-bi-and-power-query/ )

The Type.FromText Function

That’s where my new function kicks in: It includes all the writing necessary, and you just have to copy the code and use it. It’s a function with one parameter (the textual representation of the type) that returns said type.
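Following the Expression.Evaluate pattern from above, a sketch of such a function looks like this (extend the record with further types as needed):

    // Type.FromText: returns the type that the given text represents
    (TypeAsText as text) as type =>
        Expression.Evaluate(
            TypeAsText,
            [
                type any = type any,
                type binary = type binary,
                type date = type date,
                type datetime = type datetime,
                type datetimezone = type datetimezone,
                type duration = type duration,
                type function = type function,
                type list = type list,
                type logical = type logical,
                type null = type null,
                type number = type number,
                type record = type record,
                type table = type table,
                type text = type text,
                type time = type time,
                type type = type type
            ]
        )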

 

Currently it only contains M’s primitive types, but I guess you’ve spotted the pattern and can adjust it to other types yourself if necessary.

Edit: Actually, as it turned out, I was overthinking the task a bit. Check out Daniil’s comment below for a simpler version: https://www.thebiccountant.com/2019/11/17/dynamically-create-types-from-text-with-type-fromtext/#comment-1507

Enjoy & stay queryious 😉

