We've been using QQube for a couple of years successfully and are currently working to embed BI reports generated from QuickBooks into Dynamics 365. We have successfully made the connection on a couple of reports.
Now we realize that modeling our own dataset would make the presentation of data in D365 much easier. I have looked at the SQL Anywhere DB structure and have the following two questions:
1.) Can someone point me to the differences between tables based on their prefix: vd_ / vdf_ / vf_ / vl_?
2.) Is anyone aware of a connector from SQL Anywhere to Microsoft Dataverse? If I could load the QQube data, I could create the views in D365, bypassing the need for BI and dataset refreshes.
#1 above is the most pressing.
To connect to our database, all you need are the two ODBC DSNs. Information on those is found at the bottom of this guide: QQube 7.x Installed Components | CLEARIFY
With regard to the schema, I would open up a Microsoft Access, Power BI, or PowerPivot example. Very simple: one FACT table, with multiple DIMENSIONS.
Connection naming is very easy: the FACT table has LinkToClassID, and the DIMENSION has LinkForClassID.
And of course, you don't tie one FACT table to another, e.g. one analytic to another, as everything is already self-contained. For instance, inventory already contains sales, purchases, and G/L detail. Read more here: Selecting QuickBooks Analytics in QQube | CLEARIFY
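The join pattern described above can be sketched in a few lines. This is only an illustration of the LinkTo.../LinkFor... key convention using an in-memory SQLite database; the table names (vf_sales, vd_class) and columns other than LinkToClassID/LinkForClassID are made up for the example and are not the actual QQube schema.

```python
import sqlite3

# Hypothetical star schema: one FACT table (vf_sales) joined to one
# DIMENSION (vd_class) on FACT.LinkToClassID = DIMENSION.LinkForClassID.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE vd_class (LinkForClassID INTEGER, ClassName TEXT)")
cur.execute("CREATE TABLE vf_sales (LinkToClassID INTEGER, LineSalesAmount REAL)")

cur.executemany("INSERT INTO vd_class VALUES (?, ?)",
                [(1, "Retail"), (2, "Wholesale")])
cur.executemany("INSERT INTO vf_sales VALUES (?, ?)",
                [(1, 100.0), (1, 50.0), (2, 75.0)])

# The only join you ever need: LinkTo... on the FACT side, LinkFor... on
# the DIMENSION side.
cur.execute("""
    SELECT d.ClassName, SUM(f.LineSalesAmount)
    FROM vf_sales f
    JOIN vd_class d ON f.LinkToClassID = d.LinkForClassID
    GROUP BY d.ClassName
    ORDER BY d.ClassName
""")
print(cur.fetchall())  # [('Retail', 150.0), ('Wholesale', 75.0)]
```

The same pattern repeats for every dimension in an analytic, which is why the Access examples can show all the connections at a glance.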
I would also read this Pinned Forum Document which should give you detailed information on the underpinnings of QQube, and how you should approach your wishes here: I don't see a mapping document or table schemas | CLEARIFY
Thank you for the info above on #1. It is what we suspected, as it is close to a star schema, but we could not locate a definitive answer in your documents.
Again, we've been using your product for a while with great success and understand the ODBC connections. With that said, we do not want to use ODBC to bring the data into M$ Dataverse, but would prefer to connect directly via a dataflow, using a connector to populate the data into the D365 environment. I wasn't sure if anyone knew of a connector for SQL Anywhere / Sybase that could handle this.
If we have to, we can do a single dataset and then bring it to M$ Dataverse in a separate step, but we would prefer to remove any interim steps if possible. Reviewing your samples, several query names would need to be changed so that there is no confusion in measures when the query names are the same but based on different data. We can then use Tabular Editor to pull all of the existing measures that we might want, unless you have a better way to move those.
Here are the items and how I believe they would need to be changed (easy enough if we can't go straight to M$ DV).
With all due respect, you actually made it too tough on yourself - in several areas. I am sorry that you spent the time to delineate your details. I am also sorry that we were not emphatic or clear in our first response.
Just open up the Microsoft Access examples, and you will see ALL the connections for each analytic. Then you won't have to guess or surmise what belongs where. There is only one way to connect the data anyway.
Yes, we provide star schemas in the traditional data warehousing methodology: one FACT table and multiple DIMENSIONS.
First, there is no redundancy anywhere. You can't use the same dimension on several FACT tables, e.g. the account, company, and class DIMENSIONS on both the Profit and Loss FACT and the Balance Sheet FACT. They have to be aliases.
Second, with regard to "common dimensions": we may have certain fields in vd_account for one analytic, but a slightly different vd_accountBS for use with the Balance Sheet analytic. Yes, vd_company may be used on different analytics - but you would still need to create an alias if there were several FACT tables in your modeling.
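The aliasing point above can be sketched as follows: the one vd_company dimension is given a separate alias per FACT table, so each analytic's query joins only its own copy. This is an in-memory SQLite illustration; every name except vd_company (the FACT table names, columns, and data) is hypothetical.

```python
import sqlite3

# One shared dimension, two FACT tables - each query aliases vd_company
# separately instead of tying the two FACTs together through it.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE vd_company (LinkForCompanyID INTEGER, CompanyName TEXT)")
cur.execute("CREATE TABLE vf_profit_loss (LinkToCompanyID INTEGER, Amount REAL)")
cur.execute("CREATE TABLE vf_balance_sheet (LinkToCompanyID INTEGER, Balance REAL)")
cur.execute("INSERT INTO vd_company VALUES (1, 'Acme')")
cur.execute("INSERT INTO vf_profit_loss VALUES (1, 500.0)")
cur.execute("INSERT INTO vf_balance_sheet VALUES (1, 2000.0)")

# Profit & Loss uses its own alias of the dimension...
pl = cur.execute("""
    SELECT company_pl.CompanyName, SUM(f.Amount)
    FROM vf_profit_loss f
    JOIN vd_company AS company_pl
      ON f.LinkToCompanyID = company_pl.LinkForCompanyID
    GROUP BY company_pl.CompanyName
""").fetchall()

# ...and Balance Sheet uses a different alias of the same table.
bs = cur.execute("""
    SELECT company_bs.CompanyName, SUM(f.Balance)
    FROM vf_balance_sheet f
    JOIN vd_company AS company_bs
      ON f.LinkToCompanyID = company_bs.LinkForCompanyID
    GROUP BY company_bs.CompanyName
""").fetchall()

print(pl)  # [('Acme', 500.0)]
print(bs)  # [('Acme', 2000.0)]
```

In a BI model the equivalent is importing the dimension twice under two names, so each star stays self-contained.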
With regard to a direct connector, the ODBC DSNs are the only way to connect to the QQube database - and they are the easiest and simplest form of connection anyway; permissions are properly affixed, and the passwords are listed in the document we gave you earlier.
What would be best is to create a linked server - if that is possible with the product you are connecting to. That is how Microsoft Access works (or even SQL Server).
But use the Microsoft Access examples (and bring over the things in the schema that already exist) - it will make life very easy for you. There is absolutely no reason for you to re-invent the wheel.
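For anyone scripting against the ODBC DSNs mentioned above, a minimal sketch from Python would look like the following. The DSN name, user, and password here are placeholders - the real values are the ones listed in the QQube Installed Components guide - and the actual connect call (commented out) assumes the third-party pyodbc package is installed.

```python
# Placeholder credentials: substitute the real DSN name, user, and
# password from the QQube 7.x Installed Components guide.
dsn_name = "QQubeDSN"        # hypothetical DSN name
user = "your_user"           # placeholder
password = "your_password"   # placeholder

# Standard ODBC DSN-based connection string.
conn_str = f"DSN={dsn_name};UID={user};PWD={password}"
print(conn_str)

# With pyodbc installed, the connection would then be:
#   import pyodbc
#   conn = pyodbc.connect(conn_str)
#   rows = conn.cursor().execute("SELECT ...").fetchall()
```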
Thank you for your response. You were very clear in your initial response, and I understand what you are saying with respect to ODBC being the easiest way to connect and grab data for BI. Unfortunately, it is not the easiest when wanting to use the data in M$ Dataverse, due to a limitation on creating relationships based on ODBC data.
I was able to connect to the data via a Sybase connector and pull everything that was needed. It is populating a BI report correctly, and I can validate the data against a report with ODBC as the source. I simply need to add the connector to a dataflow in M$ Power Platform, and I can use the native data in D365 views instead of embedding a linked Power BI report. Also, by going this route we can update D365 fields and create D365 calculated fields based on the data from QB.
ODBC would force the following path for the data to arrive in an embedded view:
QQube > BI DataSet > BI Report > D365 Embedded BI Report
Direct DB access via the Sybase connector:
QQube > M$ Dataverse (data that can be filtered live via system and local views in D365).
Two steps with "live" data versus four steps with a "snapshot" is worth the effort on this.
If you want the steps we took to make this work let us know and we'll DM them to you.
Thanks for the info above, as it helped clear up some questions we had.
One thing to add: I may be reading between the lines, but it appears that you are trying to create one FACT table with all measures from all analytics. That's just not possible - for any number of reasons.
First, the SDK doesn't make that possible. End of story.
But also realize that each analytic is specific unto itself - and doesn't bleed into other analytics.
You may have certain transactions used in one analytic and not another. Or logic may be used in a particular situation that does not match the QB logic in another analytic.
Each field name is unique within the FACT tables. For instance, you might see our online documentation show "Line Sales Amount" in both the Sales Analytic and the Inventory Analytic.
When you pull the data across from QQube, those fields will have different names. For ease of use to the end user, we have field aliases in the Excel Add-In, PowerPivot, and Power BI.
Second, the logic for the Sales Analytic is different from what is used for the Inventory Analytic, as Inventory only looks at specific items, while Sales looks at all sales items.
There are specific reasons for creating analytics within a particular silo - they serve completely different needs.
Unfortunately, we are not able to accommodate a direct connection to the database, other than OLE DB.
Does your platform not have an OLE DB connector? Then you could simply use this connection:
Noted - we are actually using the default names now that we have access to the DB. One of the things that our team has fought with over the past couple of years is the following scenario:
JIT (Just in Time) Inventory
We need to be able to calculate the following for an individual item:
QTY - US Warehouse
QTY - Transit (Sea Container) with estimated arrival. Not summary data, but actual container arrival date(s)
QTY - Transit (Air) with estimated arrival. Not summary data, but actual arrival date(s)
QTY - Sales Order(s). Not summary data, but actual promised ship date(s)
QTY - Open PO. Not summary data, but actual completion date(s)
QTY - Run Rate - monthly projected amount based on prior months, and projected next quarter based on the next quarter of the prior year (weekly amounts)
etc, etc, etc
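The run-rate item above boils down to simple arithmetic, which can be sketched as follows. The numbers, the three-month averaging window, and the week count are made-up assumptions for illustration, not QQube logic or our actual data.

```python
# Monthly run rate: average of the prior months' quantities.
# (Window size of 3 months is an assumption.)
prior_months = [120, 130, 110]          # hypothetical units sold per month
monthly_run_rate = sum(prior_months) / len(prior_months)

# Next-quarter projection: sum of the weekly amounts from the same
# quarter of the prior year.
next_quarter_prior_year_weekly = [30, 28, 32, 31, 29, 30, 33, 27, 31, 30, 29, 32, 28]
projected_next_quarter = sum(next_quarter_prior_year_weekly)

print(monthly_run_rate)        # 120.0
print(projected_next_quarter)  # 390
```

The hard part is not this arithmetic but getting the transaction-level dates (container arrivals, promised ship dates, PO completion dates) underneath the summary figures.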
We accomplished 95% of it with the predefined "silos," but when we need to calculate arrival dates and ship dates of the individual transactions behind the summary, that is where we're missing the next level of data.
We just tested it in a BI report with data in a consolidated set of queries, and it works perfectly, even giving us historical purchase amounts to monitor price creep and tariffs.
Great product! Love QQube!
We connected directly. See below. All good.
James and Chuck,
Thanks for posting this so all can see. It is an interesting discussion.