Channel: Pentaho – Francesco Corti

Discover the documents stored into your Alfresco sites… interactively!

Thanks to the requests of some A.A.A.R. users, a brand new report showing the documents stored in every Alfresco site has been developed. The report is interactive, as you can see below.

Report12_menu

Below is a preview of the report for a test case.

Report12

The report will be part of the upcoming A.A.A.R. v2.2, but if you want to use it or customize it right away, you can download it from GitHub in the PostgreSQL version or the MySQL version.

Enjoy your analytics.

A.A.A.R. v2.2 with interactive dashboards and free analysis

Finally, the day has come!

Based on the requests and the collaboration of some of you, the brand new A.A.A.R. v2.2 has been released. I would like to explicitly thank strategicfunctions.com for the contribution. As usual, it has been an interesting experience.

The main new feature is obviously the interactive dashboard on the audit trail.

There are also two brand new free analyses, on the audit trail and on the repository.

Last but not least, there is a brand new report on documents in sites (strongly requested by the community).

I have lots of new ideas to develop, but what would you like to see next in A.A.A.R.?

A.A.A.R.

Uploading a mondrian schema to Pentaho using PDI

This post shares a solution for uploading a Mondrian schema to the Pentaho BA Server, using the REST API from a PDI transformation. If you take a look at this thread of the Pentaho forum, the goal seems to be a common problem, so we think it is a good idea to share the solution with the community. I hope this post will be helpful.

Development environment

The source code was developed and tested on a Windows platform and on a Linux Ubuntu 14.04 LTS platform. Pentaho BA Server and Pentaho Data Integration are both version 5.2.

Use case

Starting from a file containing the Mondrian schema (an XML file), our goal is to develop a PDI transformation that defines a Pentaho BA Server data source. Since we want to define the data source on the Mondrian schema, we need a so-called “Analysis Data Source”.

The strategy

Thanks to the Pentaho BA Server REST API, our strategy is to use the service below to create the data source.

http://<pentahoURL>/pentaho/plugin/data-access/api/mondrian/postAnalysis

Creating (and replacing) an Analysis Data Source is easy: simply invoke a POST call to the REST service, using a multipart request. Of course this would be easy in a programming language, but we want to use a Pentaho Data Integration (also known as Kettle) transformation. Unfortunately, Kettle is not so smart when it comes to multipart requests.
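
To see how simple the same call is outside PDI, below is a minimal, hypothetical sketch of the multipart POST in plain JavaScript (browser fetch API). The URL and field names are the ones used by the service above; the catalog name, credentials and the 'parameters' value are placeholders, and 'schemaFile' is assumed to be a File/Blob holding the Mondrian XML (for example obtained from an input of type file).

// Illustrative sketch only, not part of the PDI solution.
var form = new FormData();
form.append('uploadAnalysis', schemaFile);      // the Mondrian XML file (File/Blob)
form.append('catalogName', 'MyCatalog');        // placeholder catalog name
form.append('origCatalogName', 'MyCatalog');
form.append('parameters', '');                  // extra options, if any; left empty in this sketch
fetch('http://<pentahoURL>/pentaho/plugin/data-access/api/mondrian/postAnalysis', {
 method: 'POST',
 headers: { 'Authorization': 'Basic ' + btoa('admin:password') },  // placeholder credentials
 body: form
}).then(function(response) {
 console.log('Upload status: ' + response.status);
});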

Description of the solution

Below, the Pentaho Data Integration transformation is described.

Pentaho Upload Data Source

As you can imagine, the core of the solution is in the ‘Generate multipart entity’ step and in the ‘HTTP Post’ step. But before looking at those, let’s see what is in the ‘Generate rows’ step. There you are going to find the basic parameters that make everything work properly.

Pentaho Upload Data Source

  • uploadAnalysis contains the file name of the Mondrian schema. In the ‘Add root file’ step, this file name will be completed with the absolute path.
  • catalogName and origCatalogName contain the name of the Mondrian schema (the same one described in the XML file).
  • parameters… ok, it’s clear! 😉

Below is the source code of the ‘Generate multipart entity’ step, which defines three output fields.

  • requestEntityValue containing the multipart entity to post in the request.
  • contentType and contentLength containing the information about the request entity.
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;
import org.apache.commons.httpclient.methods.multipart.MultipartRequestEntity;
import org.apache.commons.httpclient.methods.multipart.FilePart;
import org.apache.commons.httpclient.methods.multipart.StringPart;
import org.apache.commons.httpclient.methods.multipart.Part;
import org.apache.commons.httpclient.params.HttpMethodParams;

public boolean processRow(StepMetaInterface smi, StepDataInterface sdi) throws KettleException {

 Object[] r = getRow();
 if(r == null){
  setOutputDone();
  return false;
 }

 String uploadAnalysis = get(Fields.In,"uploadAnalysis").getString(r);
 String catalogName = get(Fields.In, "catalogName").getString(r);
 String origCatalogName = get(Fields.In, "origCatalogName").getString(r);
 String parameters = get(Fields.In, "parameters").getString(r);

 try {

  // Build the multipart parts: the Mondrian schema file plus the string fields expected by the service.
  File filePart = new File(uploadAnalysis);
  Part[] parts = {
   new FilePart("uploadAnalysis", filePart),
   new StringPart("catalogName", catalogName),
   new StringPart("origCatalogName", origCatalogName),
   new StringPart("parameters", parameters)
  };
  MultipartRequestEntity requestEntity = new MultipartRequestEntity(parts, new HttpMethodParams());

  // Serialize the multipart entity and capture its content type and length for the HTTP Post step.
  ByteArrayOutputStream bOutput = new ByteArrayOutputStream();
  requestEntity.writeRequest(bOutput);
  String requestEntityValue = new String(bOutput.toByteArray());
  String contentType = requestEntity.getContentType();
  String contentLength = String.valueOf(requestEntity.getContentLength());

  // Add the three new fields to the output row and pass it to the next step.
  Object[] outputRow = createOutputRow(r, data.outputRowMeta.size());
  get(Fields.Out, "requestEntityValue").setValue(outputRow, requestEntityValue);
  get(Fields.Out, "contentType").setValue(outputRow, contentType);
  get(Fields.Out, "contentLength").setValue(outputRow, contentLength);
  putRow(data.outputRowMeta, outputRow);

  return true;

 } catch(FileNotFoundException ffNotFoundEx) {
  logError("File '" + uploadAnalysis + "' not found!!");
  throw new KettleException(ffNotFoundEx);
 } catch(IOException ioEx) {
  logError("Error generating the value of the multipart request!!");
  throw new KettleException(ioEx);
 }
}

Below is the ‘HTTP Post’ step that, finally, sends the POST request.

Pentaho Upload Data Source

Conclusion

This post, kindly developed by Stefano Massarini, shared the solution for uploading a Mondrian schema to the Pentaho BA Server, using the REST API from a PDI transformation. If you would like to use and evaluate the solution, you can download it here.

Review of the Pentaho Data Integration video course by Itamar Steinberg

In this post I have the opportunity to share a review of a brand new Pentaho Data Integration video course by Itamar Steinberg. The full name of the course is “mastering data integration (ETL) with pentaho kettle PDI” and it is available for purchase on the Udemy website.

The video course is composed of 80 lectures and more than 10 hours of content. It is a walk-through of a real ETL project using Pentaho Data Integration (also known as Kettle), starting from the design of the ETL with some easy steps that become more and more complex, layer by layer, as you go forward. This Pentaho Data Integration video course also covers some basic concepts of data integration and data warehousing techniques.

Intended audience and required skills

Itamar declares that “the course is about taking you from the beginning and transfer you to a master of Pentaho kettle”. The required skills for attendees are basic SQL and database design. Of course, we are talking about a technical video course, so the intended attendees are developers or software architects.

After viewing the whole video course, I agree with Itamar about the required skills. He starts from the very basic concepts of a generic ETL project (why an ETL is useful, the OLAP model, basic tools like the SQL client, etc.) but also covers the most important topics of Pentaho Data Integration (aka Kettle). IMHO, the goal of becoming a master of PDI is quite hard to reach, not because of the quality of the video course, but rather because of the huge amount of things Kettle is able to do. In other words: sometimes Itamar oversells his good product a little. 😉

Sections and modules

The video course is quite long (more than 10 hours) and is divided into fourteen sections. But we can try to see a more general separation into chapters, useful for reviewing it. Below are the chapters I can see:

  1. Introduction (Section 1). This is a kind of presentation of the author and the video course. This is mainly advertising.
  2. Theory and concepts of data integration (Section 2). Introduction to some basic concepts of the Data Warehousing techniques.
  3. Setup of the environment (Section 3 to 4). Description and installation of all the tools required by the ETL project discussed later.
  4. Development of a real ETL project using Pentaho Data Integration (Section 5 to 13). Description and development of a real ETL project using Pentaho Data Integration. Of course, this is the core of the video course.
  5. Conclusion (Section 14). This is a kind of summary of the video course by the author.

Strengths and weaknesses

Below I would like to go straight to the point of the review in a clear way, sharing some strengths and weaknesses of the video course, of course from my modest and honest point of view.

Strength: Pentaho Data Integration skills. What do you expect from a PDI course? Of course, to learn from someone who is a professional on Pentaho Data Integration! With Itamar you will find exactly that, and everything you will see is a clear result of his experience in real-life projects.

Strength: Data Warehousing techniques. Most of the books I have read and most of the video courses I have seen stick strictly to the topics they want to cover. Of course this is correct, but with this video course you will also get in touch with some basics of Data Warehousing techniques… not all the basics, but some of the most relevant ones. 😉

Strength: Practical approach to the ETL project. Usually I prefer a theoretical approach to Data Warehouse and Business Intelligence projects, but Itamar takes his inspiration from the famous ‘The Data Warehouse Toolkit’ (R. Kimball), which is a real bible. For this reason the video course is definitely coherent with this practical approach.

Weakness: The video course should have been optimized to reduce its duration. IMHO the video course is quite long and some non-relevant parts could have been cut in post-production. (More than) ten hours is a lot of time for an attendee to stay concentrated on a video course (Packt Publishing, for example, indicates two hours for a single video course).

Conclusions

As usual, the final question I always think about is: would I recommend this Pentaho Data Integration video course to someone else? In this case my answer is yes. I think that Itamar has done a good job and, above all, the course could be useful for a lot of new professionals in Data Warehousing and Business Intelligence projects. Of course some improvements would give the course higher quality (especially compacting the duration and more natural English speaking), but the goal of sharing his experience with the attendees is definitely reached. Thank you, Itamar, for the opportunity to review your course.

A.A.A.R. v2.3 to analyze multiple installations of Alfresco

Want to analyze multiple installations of Alfresco? Starting from v2.3, A.A.A.R. is able to manage one, two, three or dozens of Alfresco installations without any specific limitation. To understand how, take a look at the documentation.

multiple_installations_of_alfresco

The key point is that you can set up all the Alfresco installations you have simply by adding one (or more) rows to the ‘dm_dim_alfresco‘ table directly in the A.A.A.R. Data Mart. Once you have properly run the extraction from all of your Alfresco installations, you can start analyzing your data as usual, using dashboards and reports.

With A.A.A.R. v2.3, all the dashboards and reports have been re-developed to manage all the Alfresco installations you want to analyze.

multiple_installations_of_alfresco

Once more: thank you to the community and to those who submit all the relevant suggestions from their use cases.

Developing a smart filter for Pentaho dashboards

This post describes the development of a smart filter using the Pentaho CDE dashboards of the Pentaho Analytics Platform. The smart filter is presented in a first version, defined as a single-level filter. The post is a step-by-step description of how to reach the goal.

I would like to share the development of a more advanced filter, different from the default and well-known drop-down menus, check-boxes, etc. The goal is to enable developers to design more advanced and modern results, compared to the awful dashboards that I sometimes see. 😉

The goal

One of the most useful features of the Pentaho suite is that you can develop your own dashboards and analytics with a real framework, ready to use. Usually the dashboards are defined by charts, tables or several other components, but interaction with them often requires the development of filters, menus or configuration components.

This post shares an example of a smart filter based on the MONDIAL database, useful for the purpose of the demo. The goal is to develop a filter on a simple chart that lists the top ten languages spoken in every continent of the world.

Preparing the environment

All the development has been done on the platform described below.

  • Operating System: Linux Ubuntu 14.04 LTS (vanilla installation).
  • Java v1.7.0_65 (installed with ‘sudo apt-get install default-jre’).
  • MySQL and MySQL Workbench (installed with ‘sudo apt-get install mysql-server mysql-workbench’).
  • Pentaho Analytics Platform v5.2 (tested also on v5.0 and v5.1).

In the version of the Pentaho Analytics Platform I used, I had to update the Community Dashboards Framework (CDF), the Community Data Access (CDA) and the Community Dashboard Editor (CDE) to the latest TRUNK version because of some relevant bugs. This task is easy with the Pentaho Marketplace… a single ‘click and restart’ approach. 😉

Preparing the “mondial” database

Before developing the dashboards, the database should be set up. For testing purposes I chose the MONDIAL database from the Georg-August-Universität Göttingen. It is very complete and interesting to use. 😉

There you can find an interesting and complete entity-relationship structure documented. There are a lot of available versions of the database and I tried the MySQL one.

To prepare the database there are two different scripts (schema and data) to run using MySQL Workbench.

My choice was to focus the tests on the portion of the schema defining the Continents that contain the Countries, with the several languages spoken there. Below is the portion of the schema used in the development.

Mondial-ER

Presentation of the dashboard

As described earlier in the post, the dashboard implements a filter on a simple chart that lists the top ten languages spoken in every continent of the world. I think a video showing the interaction with the user is preferable to a boring description. Below is the video showing the final result.

The technical approach

The result described above can be obtained by working on two main components, which you can manage using the Community Dashboard Editor (CDE), in particular in the Components Layer described below.

smart_filter

In more detail, the result can be obtained by working on the components described below:

  • The Popup component with id ‘filterGeoPopupComponent’, hidden in the ‘hiddenPopups’ row of the HTML layer.
  • The Button component with id ‘filterGeoButton’ that, once pressed, shows the popup using the JavaScript function described below.
function f(e) {
 render_filterGeoPopupComponent.popup($(e.target));
}

Inside the Popup component, the Multiple Select component with id ‘filterGeoLevel1Component’ is the real filter on the Continent.
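
As a side note, below is a hedged sketch of how a selection can be propagated manually to the rest of the dashboard; the parameter name 'continentParam' is illustrative and not taken from the downloadable dashboard, where the Multiple Select component is simply bound to a parameter that the chart listens to.

// Hypothetical snippet: propagate the selected continent to every component listening to the parameter.
// 'continentParam' is an assumed parameter name, not the one used in the downloadable dashboard.
function f(value) {
 Dashboards.fireChange('continentParam', value);
}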

Source code

Last but not least, I would like to share the source code of the dashboard so that everyone can check the technical details. The zip file you can download here contains the three files below.

  • smart_filter_1.cda
  • smart_filter_1.cdfde
  • smart_filter_1.wcdf

To try (and use) it, simply upload the files into the repository using the Pentaho Analytics Platform interface (Browse button), and open the wcdf.

Conclusion

This post described the development of a smart filter using the Pentaho CDE dashboards of the Pentaho Analytics Platform: a one-level filter on a simple chart that lists the top ten languages spoken in every continent of the world. The aim is to enable developers to design more advanced and modern dashboards.

Developing a two levels smart filter for Pentaho dashboards

After the description of the basic version of the smart filter, this post describes the development of a two-level smart filter using the Pentaho CDE dashboards of the Pentaho Analytics Platform. The post is a step-by-step description of how to reach the goal.

I would like to share the development of a more advanced filter, different from the default and well-known drop-down menus, check-boxes, etc. The goal is to enable developers to design more advanced and modern results, compared to the awful dashboards that I sometimes see. 😉

The goal

One of the most useful features of the Pentaho suite is that you can develop your own dashboards and analytics with a real framework, ready to use. Usually the dashboards are defined by charts, tables or several other components, but interaction with them often requires the development of filters, menus or configuration components.

In the same way we developed the first example in a previous post, this post shares an example of a smart filter based on the MONDIAL database, useful for the purpose of the demo. The goal is to develop a filter on a simple chart that lists the top fifteen languages spoken in every country of the world (in the previous example we talked about continents). The two-level filter is defined by the Continent level, which contains the Countries.

Preparing the environment

All the development has been done on the platform described below.

  • Operating System: Linux Ubuntu 14.04 LTS (vanilla installation).
  • Java v1.7.0_65 (installed with ‘sudo apt-get install default-jre’).
  • MySQL and MySQL Workbench (installed with ‘sudo apt-get install mysql-server mysql-workbench’).
  • Pentaho Analytics Platform v5.2 (tested also on v5.0 and v5.1).

In the version of the Pentaho Analytics Platform I used, I had to update the Community Dashboards Framework (CDF), the Community Data Access (CDA) and the Community Dashboard Editor (CDE) to the latest TRUNK version because of some relevant bugs. This task is easy with the Pentaho Marketplace… a single ‘click and restart’ approach. 😉

Preparing the “mondial” database

Before developing the dashboards, the database should be set up. For testing purposes I chose the MONDIAL database from the Georg-August-Universität Göttingen. For further details about its structure, take a look at the post here where the database is described.

Presentation of the dashboard

As described earlier in the post, the dashboard implements a filter on a simple chart that lists the top fifteen languages spoken in every country of the world. I think a video showing the interaction with the user is preferable to a boring description. Below is the video showing the final result.

The technical approach

The result described above can be obtained with the same strategy described in the previous post, working with the Community Dashboard Editor (CDE), in particular in the Components Layer shown below.

smart_filter 2

In more detail, the result can be obtained by working on the components described below:

  • The Popup component with id ‘filterGeoPopupComponent’, hidden in the ‘hiddenPopups’ row of the HTML layer.
  • The Button component with id ‘filterGeoButton’ that, once pressed, shows the popup using the JavaScript function described below.
function f(e) {
 render_filterGeoPopupComponent.popup($(e.target));
}

Inside the Popup component, the Multiple Select components with ids ‘filterGeoLevel1Component’ and ‘filterGeoLevel2Component’ are the real filter on the Countries.

In this case, the development is a bit more advanced because we have to manage the first selection of the Continents, which filters the Countries, and, later, the query about the languages.
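
Below is a hedged sketch of the cascade (the parameter names 'continentParam' and 'countryParam' are illustrative, not the ones used in the downloadable dashboard): the continent selection fires a change on its parameter, the Countries select listens to that parameter to refresh its own options, and the language query is then filtered by the resulting selection.

// Hypothetical sketch of the cascade between the two levels.
// 'continentParam' and 'countryParam' are assumed names, not taken from the dashboard.
function f(selectedContinents) {
 // Firing the change refreshes every component listening to 'continentParam',
 // including the Countries Multiple Select, whose query is filtered by it;
 // the chart, in turn, listens to 'countryParam'.
 Dashboards.fireChange('continentParam', selectedContinents);
}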

Source code

Last but not least, I would like to share the source code of the dashboard so that everyone can check the technical details. The zip file you can download here contains the three files below.

  • smart_filter_2.cda
  • smart_filter_2.cdfde
  • smart_filter_2.wcdf

To try (and use) it, simply upload the files into the repository using the Pentaho Analytics Platform interface (Browse button), and open the wcdf.

Conclusion

This post described the development of a two-level smart filter using the Pentaho CDE dashboards of the Pentaho Analytics Platform: a filter on a simple chart that lists the top fifteen languages spoken in every country of the world. The aim is to enable developers to design more advanced and modern dashboards.

A.A.A.R. v2.4 with responsive dashboards


Developing a Bootstrap text field with CDE dashboard

In this post I would like to share how to develop a Bootstrap text field in a Pentaho dashboard. Pentaho and Bootstrap are long-time friends and various solutions are available on the web (for example the Diethard Steiner blog) or directly in the Pentaho Marketplace (for example the Ivy Bootstrap Component). In this post I would like to share “my” solution for this purpose, not because I think it is “better” or “worse”, but because I have used it with satisfaction. My purpose was to develop a Bootstrap dashboard using the standard CDE components as much as possible, with minimal impact and dependencies.

About Bootstrap

I would prefer not to describe in detail what Bootstrap is but, in a few words, Bootstrap is the most popular HTML, CSS, and JS framework for developing responsive, mobile-first projects on the web.

Have you ever seen Bootstrap in action? Yes, for sure: for example on the Twitter website or the Delicious website.

Want to understand better what Bootstrap is? Take a look at the official website.

Description of the environment

Coming to our use case, I start the description of the solution with the development environment. Below is a brief description of it.

  • Linux Ubuntu 14.04 LTS. Nothing more and nothing less than a vanilla installation.
  • Java v1.7.0_60. If you want to know how to install it, take a look here.
  • Pentaho Business Analytics Platform Community Edition v5.2.

Developing the empty Bootstrap dashboard

The first task is to develop an empty dashboard with Bootstrap support. For this task, the Pentaho BA Platform has everything we need.

Let’s start by creating a new CDE dashboard using the File menu in the Pentaho desktop. Below is a screenshot describing it.

create_bootstrap_dashboard

Now that the CDE dashboard is open, let’s save it with Bootstrap support. To do this, simply access the settings menu. Below is a screenshot describing it.

settings_bootstrap_dashboard

To complete the task, in the Layout Panel you get after saving the dashboard, let’s create the simple structure described in the screenshot below.

boostrap_dashboard_layout

As you can see, the structure is composed of an initial row with two columns, and each column contains a sub-row: one called ‘field1’, where we are going to set up an example of a default text field, and one called ‘field1Bootstrap’, where we are going to set up the Bootstrap text field.

Save the dashboard and everything is ready for the next task.

Developing the Bootstrap text box

Now that we have an empty Bootstrap dashboard, let’s define a standard text field using the CDE tools. For this task, let’s access the Components Panel and define a Text Input Component (you can choose it from the ‘Selects’ menu on the left).

The Text Input Component should be defined with defaults, as described in the screenshot below. Please note that the component is going to be rendered in the ‘field1’ row, so on the left of the resulting dashboard.

boostrap_dashboard_cpomponents_1

Now that the default Text Input Component is developed, let’s define another Text Input Component with a few differences. Below is a screenshot describing it.

boostrap_dashboard_cpomponents_2

Please note that there are two differences: of course the different HtmlObject (which renders the field on the right side of the dashboard) and the Post Execution property. The Post Execution property is defined with the code described below.

function f() {
 $('#' + this.name).addClass('form-control');
 $('#' + this.name).removeAttr('value');
 $('#' + this.name).attr('placeholder','Enter value here');
}

This JavaScript code retrieves the HTML object using jQuery, adds a new attribute class="form-control", removes the value attribute and adds another attribute placeholder="Enter value here". This HTML manipulation makes the <input> tag generated by the CDE tools compliant with the Bootstrap syntax. To understand more about the Bootstrap syntax for the input tag, please take a look here.

As you can easily understand, this is the only customization required to make the CDE tools compliant with the Bootstrap syntax.

The final result

Now that the Bootstrap dashboard is developed, let’s save it and see the preview. Below is a screenshot of it.

boostrap_dashboard_preview

As you can see, on the left we have the standard text field of the CDE tools and on the right the same text field, but compliant with the Bootstrap look & feel. All the development and interactions are the same because you are still working with a standard Pentaho component.

Goal reached!

About the strategy and the solution

I think we can agree that this approach could be followed to apply the Bootstrap look & feel to all the available components of the CDE tools. In some cases this is quite tricky and probably limited, but in all of my past developments I did not find relevant limits. In other words, this approach works fine for the most common needs!
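
As an example, below is a minimal, hypothetical sketch (not taken from A.A.A.R.) of the same pattern applied to a standard Select Component, assuming the component renders a plain <select> tag inside its HtmlObject.

// Hypothetical Post Execution function for a Select Component:
// add the Bootstrap 'form-control' class to the generated <select> tag.
function f() {
 $('#' + this.htmlObject + ' select').addClass('form-control');
}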

Conclusion

In this post I shared how to develop a Bootstrap text field in a Pentaho dashboard using the standard components of the CDE tools. Several solutions to this goal have been developed in the past, but this is yet another one. I used this approach with satisfaction while developing the A.A.A.R. responsive dashboards and I hope it will help some of you in the same way.

Developing Bootstrap buttons with Pentaho CDE dashboard

In this post I would like to share how to develop a Bootstrap button in a Pentaho dashboard. Pentaho and Bootstrap are long-time friends and in a past tutorial I started to dive deep into this development with a tutorial about the text field. Various solutions are available on the web for this purpose (for example the Diethard Steiner blog or the Ivy Bootstrap Component), but here I would like to share “my solution”, not because I think it is “better” or “worse”, but because I have used it in a real-life scenario with satisfaction.

Description of the environment

The development environment used for this purpose is the same as in the tutorial about the text field. The only difference, this time, is that I started to use Pentaho 5.3 (Release Candidate). For the purpose of this tutorial, nothing really changes, so I think the solution can also be used with older versions of the Pentaho suite.

Developing the layout

Starting from an empty CDE dashboard (if you want to know how to develop an empty CDE dashboard, you can take a look here), let’s define the layout. The layout of the dashboard is quite simple and is described in the picture below.

bootstrap_buttons_layout

No special configurations, no particular customizations; only some simple rows with some named columns inside.

Developing the default Bootstrap button

Now that we have an empty Bootstrap dashboard with a useful layout, let’s define a standard button component using the CTools. For this task, let’s access the Components Panel and define a Button Component (you can choose it from the ‘Others’ menu on the left).

The Button Component should be defined as described in the screenshot below.

bootstrap_buttons_component

The only customizations are:

  • The name of the component (feel free to set the name you prefer).
  • The htmlObject indicating the rendering position in the layout.
  • The post execution function, which we are going to describe below.

The post execution function is really the only difference from a standard development of the button. Below is the content of the function.

function f() {
 $('#' + this.htmlObject + ' button:first-child').addClass("btn btn-default");
}

This JavaScript code retrieves the HTML object using jQuery and adds the class="btn btn-default" attribute, according to the Bootstrap syntax. To understand more about the Bootstrap syntax for buttons, please take a look here. As you can easily understand, this is the only customization required to make the CTools component compliant with the Bootstrap syntax.

Now that the Bootstrap dashboard is developed, let’s save it and see the preview.

bootstrap_button_preview

Developing all the different types of Bootstrap buttons

Now that we know how to develop a Bootstrap button, let’s see how to develop all the different types of buttons that Bootstrap makes available. The difference between the buttons is only in the post execution function. Below is the function for each type, with a preview of the button.

function f() {
 $('#' + this.htmlObject + ' button:first-child').addClass("btn btn-primary");
}

bootstrap_button_preview_primary

function f() {
 $('#' + this.htmlObject + ' button:first-child').addClass("btn btn-success");
}

bootstrap_button_preview_success

function f() {
 $('#' + this.htmlObject + ' button:first-child').addClass("btn btn-info");
}

bootstrap_button_preview_info

function f() {
 $('#' + this.htmlObject + ' button:first-child').addClass("btn btn-warning");
}

bootstrap_button_preview_warning

function f() {
 $('#' + this.htmlObject + ' button:first-child').addClass("btn btn-danger");
}

bootstrap_button_preview_danger

function f() {
 $('#' + this.htmlObject + ' button:first-child').addClass("btn btn-link");
}

bootstrap_button_preview_link

How to manage button sizes

With exactly the same approach, we can easily manage the size of the buttons. According to Bootstrap’s rules, let’s develop what we can see below.

bootstrap_button_preview_sizes

// Primary large button (the blue one)
function f() {
 $('#' + this.htmlObject + ' button:first-child').addClass("btn btn-primary btn-lg");
}

// Default large button (the white one)
function f() {
 $('#' + this.htmlObject + ' button:first-child').addClass("btn btn-default btn-lg");
}
// Primary small button (the blue one)
function f() {
 $('#' + this.htmlObject + ' button:first-child').addClass("btn btn-primary btn-sm");
}

// Default small button (the white one)
function f() {
 $('#' + this.htmlObject + ' button:first-child').addClass("btn btn-default btn-sm");
}
// Primary extra small button (the blue one)
function f() {
 $('#' + this.htmlObject + ' button:first-child').addClass("btn btn-primary btn-xs");
}

// Default extra small button (the white one)
function f() {
 $('#' + this.htmlObject + ' button:first-child').addClass("btn btn-default btn-xs");
}
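
The same approach also works for other Bootstrap button modifiers. For example, below is a hedged sketch (not part of the original tutorial) of a full-width, block-level primary button, assuming Bootstrap's btn-block class.

// Hypothetical example: a full-width (block-level) primary button.
function f() {
 $('#' + this.htmlObject + ' button:first-child').addClass("btn btn-primary btn-block");
}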

Conclusion

In this post I shared how to develop all the Bootstrap buttons in a Pentaho dashboard using the standard components of the CTools. Several solutions to this goal have been developed in the past, but this is yet another one. I used this approach with satisfaction while developing the A.A.A.R. responsive dashboards and I hope it will help some of you in the same way.

Pentaho is going to be acquired by Hitachi Data Systems (HDS)

How to customize A.A.A.R. dashboards with your Company logo

A.A.A.R. v2.4.1 for the brand new Pentaho 5.3

A.A.A.R. v3.1 with workflow analytics

Probably the last business element that A.A.A.R. didn’t have was analytics on workflow instances and tasks. Recently I received some requests on that topic, so A.A.A.R. v3.1 comes with native support for workflows. Now audit trail, documents, folders, workflows and custom properties are fully supported in your custom analytics.

workflow_dashboard

While developing the workflow analytics support, some missing Alfresco services became clear, but the A.A.A.R. Alfresco AMP is able to solve the issue very easily.

With this latest release I would like to explicitly thank all the people who send me feedback about this project and the hundreds of people who download it from SourceForge every week. Good stuff! :-)

A.A.A.R. for documents, folders, audit trail, workflow analysis and custom metadata

Starting from version 3.1, A.A.A.R. Alfresco analytics covers four areas: repository analysis (documents and folders), audit trail, workflow and custom metadata analytics. A wide variety of reports, dashboards and free analyses are available for free on all devices (mobile and not), thanks to a responsive user interface.

data_marts

The whole solution is completely in your hands if you want to customize it, extend it, apply it to multiple Alfresco instances or export the results directly into the Alfresco repository.

Not enough for your needs? Contact me to share what you are looking for and I’ll evaluate adding the feature to the next release.

Enjoy analytics!

Pentaho 5.4 release with A.A.A.R. 3.1 compliance

Today Pedro Alves announced the first “Hitachi Data Systems version”, the brand new Pentaho 5.4 Community and Enterprise Editions, just a few days after the announcement that the Open Source heritage won’t be abandoned in the near future (let’s see about the far future). At the same time, A.A.A.R. 3.1 has been tested against and is fully supported by this latest release of Pentaho.

So, what can I say: enjoy Pentaho 5.4… together with A.A.A.R. analytics.

Sparkl error on Pentaho 5.4.0.1-130 with multiple modules

Thanks to the community, an issue has been submitted to me about a conflict between the Saiku module, the Pivot4J Analytics module and A.A.A.R. in the Sparkl App Builder on Pentaho 5.4 (more precisely on 5.4.0.1-130). Below is a brief description of the tests, hoping to help the community avoid the issue and the Pentaho team solve it in a future release.

UPDATE: Read here for an update (and the solution) to the conflict.

Description of the test environment

Starting from a vanilla installation of Ubuntu 14.04.02 LTS, Oracle Java 1.7.0.71 has been installed and used below. Pentaho BI Server 5.4.0.1-130 has been downloaded from SourceForge and installed following this tutorial.

Test 1 – Installation of the three modules

Once the environment is ready, the three modules mentioned above (Saiku, Pivot4J Analytics and A.A.A.R.) have been installed from the Pentaho Marketplace.

The detailed instructions to install the modules can be found on this page. As required by the Pentaho suite, when the modules are installed from the Marketplace, the Pentaho server needs to be restarted.

Once the Pentaho BI Server is restarted, access the Pentaho User Console as administrator (otherwise you will not have permission to access the configuration dashboards). Click on Tools -> Sparkl and a dashboard will appear with the error shown below.

error

Test 2 – Removing Pivot4J Analytics module

Removing the Pivot4J Analytics module from the Marketplace (and, of course, restarting Pentaho again), the error disappears.

Adding the Pivot4J Analytics module back from the Marketplace (and, of course, restarting Pentaho again), the error appears again.

Test 3 – Removing Saiku module

Removing the Saiku module from the Marketplace (and, of course, restarting Pentaho again), the error disappears.

Adding the Saiku module back from the Marketplace (and, of course, restarting Pentaho again), the error appears again.

Test 4 – Removing A.A.A.R. module

Removing the A.A.A.R. module from the Marketplace (and, of course, restarting Pentaho again), the error disappears.

Adding the A.A.A.R. module back from the Marketplace (and, of course, restarting Pentaho again), the error appears again.

Conclusion

I’m personally sure that this behaviour was not present in the previous version, Pentaho BI Server 5.4.0.0-128, and I hope the Pentaho team will solve it in a future release of the server. Until then, this post should help you avoid the issue and the conflict.

A.A.A.R. for a long time extraction on big Alfresco repositories

During my support activities on the A.A.A.R. solution I often receive the question below. The context is a request for support about optimizing the extraction and reducing the data extraction time of A.A.A.R.

The script ran for hours but the data extraction process didn’t complete.
In the log I always see something like this:

Cmis Input modified document.0 – Cmis Input – Retrieved n.0 results from item n.714 on a total of n.967 results.
Cmis Input modified document.0 – Cmis Input – Retrieved n.0 results from item n.714 on a total of n.967 results.
Cmis Input modified document.0 – Cmis Input – Retrieved n.0 results from item n.714 on a total of n.967 results.

Francesco, could you give me support please?

In this post I would like to face this relevant issue, describing the reasons for this behaviour and focusing on the solution (because there is a solution), so that you can test and use A.A.A.R. with satisfaction on your Alfresco installations.

Why does this issue happen?

In those cases, I usually explain to the users that the extraction process is developed to retrieve, by default, all the available information about audits, repository (with all the details of the data structure) and workflows. This behaviour has been a design choice, to show all the possible analytics, dashboards and reports on all the information stored in your Alfresco instances. I think this is definitely a good choice, but when the Alfresco repositories contain a big (or huge) quantity of data, or when the resources are not enough, this massive extraction can run for hours and hours and hours… and in some cases this is not acceptable.

You are probably thinking that I could better define what I mean by “big (or huge) quantity of data” and “resources are not enough”. The correct answer to those two key concepts would be a benchmark of the A.A.A.R. solution and a definition of the minimum required resources. Instead of those (relevant) topics, I would like to focus this post on the issue of the “not acceptable duration” of the extraction process, of course with the goal of describing how to face it and solve it.

The possible solutions to the issue

Talking about solutions, below are the two main ones I always suggest:

  1. Improve the performance of the extraction process, optimizing the “longest” tasks with a development activity.
  2. Tune the extraction using the available parameters (because A.A.A.R. has some interesting parameters for this purpose).

The first solution is my favourite one because it solves the issue for good, even if we all know it requires development and effort as a consequence. The second solution, being concrete and practical, is the easiest one, even if we have to agree that every tuning choice can have an impact on the analytics and results.

Description of the A.A.A.R. parameters

If the choice is not to develop an optimization of the extraction process, the suggestion is to take a look at the parameters of the AAAR_Extract script, in particular the ones described below.

...
 # Set by writeConfiguration.kjb.
 GET_AUDIT="true"        # extract the audit trail data
 GET_REPOSITORY="true"   # extract documents and folders
 GET_PARENTS="false"     # extract the repository structure (folder hierarchy)
 GET_WORKFLOWS="false"   # extract workflow instances and tasks
 ...

Below is a brief description of the parameters and their use.

GET_AUDIT:=’true’|’false’

With this parameter you request the extraction of the audit data from Alfresco. If the audit analytics are not your goal or interest, set it to false and the extraction process will be faster. By default the value is set to true.

GET_REPOSITORY:=’true’|’false’

With this parameter you request the extraction of the repository data from Alfresco. If the repository analytics are not your goal or interest, set it to false and the extraction process will be faster. By default the value is set to true.

GET_PARENTS:=’true’|’false’

With this parameter you request the extraction of the repository structure from Alfresco (NB: it affects only the repository structure). If you are not interested in analytics on the repository structure (for example: the content of a site, which documents are stored in a folder and its subfolders, etc.), set it to false and the extraction process will be faster. By default the value is set to true. The parameter is not used if GET_REPOSITORY is set to false.

HINT: With a big Alfresco repository, structured in a huge number of folders/subfolders, this setting can be crucial and save 95% of your extraction time. 😉

GET_WORKFLOWS:=’true’|’false’

With this parameter you request the extraction of the workflow data from Alfresco. If the workflow analytics are not your goal or interest, set it to false and the extraction process will be faster. By default the value is set to true.

Conclusion

In this post I shared some relevant parameters of the A.A.A.R. extraction process. These parameters allow you to tune the solution and they are crucial in some practical use cases, for example with big Alfresco repositories.

slf4j conflict during AAAR_Extract execution

During my support activities on the A.A.A.R. solution, I have received a few contacts reporting the error described below. The context is the first execution of the AAAR_Extract script, immediately after the first installation.

2015/07/25 16:25:15 - Cmis Input documents before last update.0 - ERROR (version 5.4.0.1-130, build 1 from 2015-06-14_12-34-55 by buildguy) : Unexpected error
2015/07/25 16:25:15 - Cmis Input documents before last update.0 - ERROR (version 5.4.0.1-130, build 1 from 2015-06-14_12-34-55 by buildguy) : java.lang.LinkageError: loader constraint violation: when resolving method "org.slf4j.impl.StaticLoggerBinder.getLoggerFactory()Lorg/slf4j/ILoggerFactory;" the class loader (instance of org/pentaho/di/core/plugins/KettleURLClassLoader) of the current class, org/slf4j/LoggerFactory, and the class loader (instance of java/net/URLClassLoader) for resolved class, org/slf4j/impl/StaticLoggerBinder, have different Class objects for the type LoggerFactory; used in the signature
2015/07/25 16:25:15 - Cmis Input documents before last update.0 - at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:299)
2015/07/25 16:25:15 - Cmis Input documents before last update.0 - at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:269)
2015/07/25 16:25:15 - Cmis Input documents before last update.0 - at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:281)
2015/07/25 16:25:15 - Cmis Input documents before last update.0 - at org.apache.chemistry.opencmis.client.bindings.cache.impl.CacheImpl.<clinit>(CacheImpl.java:38)
2015/07/25 16:25:15 - Cmis Input documents before last update.0 - at org.apache.chemistry.opencmis.client.bindings.impl.RepositoryInfoCache.<init>(RepositoryInfoCache.java:56)
2015/07/25 16:25:15 - Cmis Input documents before last update.0 - at org.apache.chemistry.opencmis.client.bindings.impl.CmisBindingImpl.clearAllCaches(CmisBindingImpl.java:253)
2015/07/25 16:25:15 - Cmis Input documents before last update.0 - at org.apache.chemistry.opencmis.client.bindings.impl.CmisBindingImpl.<init>(CmisBindingImpl.java:150)
2015/07/25 16:25:15 - Cmis Input documents before last update.0 - at org.apache.chemistry.opencmis.client.bindings.CmisBindingFactory.createCmisAtomPubBinding(CmisBindingFactory.java:146)
2015/07/25 16:25:15 - Cmis Input documents before last update.0 - at org.apache.chemistry.opencmis.client.runtime.CmisBindingHelper.createAtomPubBinding(CmisBindingHelper.java:98)
2015/07/25 16:25:15 - Cmis Input documents before last update.0 - at org.apache.chemistry.opencmis.client.runtime.CmisBindingHelper.createBinding(CmisBindingHelper.java:56)
2015/07/25 16:25:15 - Cmis Input documents before last update.0 - at org.apache.chemistry.opencmis.client.runtime.SessionFactoryImpl.getRepositories(SessionFactoryImpl.java:133)
2015/07/25 16:25:15 - Cmis Input documents before last update.0 - at org.apache.chemistry.opencmis.client.runtime.SessionFactoryImpl.getRepositories(SessionFactoryImpl.java:112)
2015/07/25 16:25:15 - Cmis Input documents before last update.0 - at it.francescocorti.kettle.cmisinput.CmisSessionFactory.getNewSession(Unknown Source)
2015/07/25 16:25:15 - Cmis Input documents before last update.0 - at it.francescocorti.kettle.cmisinput.CmisSessionFactory.getSession(Unknown Source)
2015/07/25 16:25:15 - Cmis Input documents before last update.0 - at it.francescocorti.kettle.cmisinput.CmisInputMeta.getSession(Unknown Source)
2015/07/25 16:25:15 - Cmis Input documents before last update.0 - at it.francescocorti.kettle.cmisinput.CmisInputMeta.getFields(Unknown Source)
2015/07/25 16:25:15 - Cmis Input documents before last update.0 - at it.francescocorti.kettle.cmisinput.CmisInput.processRow(Unknown Source)
2015/07/25 16:25:15 - Cmis Input documents before last update.0 - at org.pentaho.di.trans.step.RunThread.run(RunThread.java:62)
2015/07/25 16:25:15 - Cmis Input documents before last update.0 - at java.lang.Thread.run(Thread.java:722)

In this post I would like to face this issue, describing the reasons for this behaviour and focusing on the solution (because there is a solution).

Description of the issue

The exception reports a conflict on the org/slf4j/LoggerFactory class, declared in the slf4j-api-1.7.5.jar library. If you take a look into the Pentaho installation with A.A.A.R. on board, you will find two instances of the library in two different paths: the first in the tomcat/lib folder and the second in the /opt/pentaho/data-integration/plugins/steps/CmisInput/ folder. The reason for the conflict is that the Java process (probably because of the CLASSPATH) does not load the “right library” and does not recognize the signature, causing the error.

In the link below you can find a better description of the behaviour (and the solution).
http://stackoverflow.com/questions/29504180/slf4j-error-class-loader-have-different-class-objects-for-the-type

How to solve the exception

Thanks to the contribution of Fabio Benevento and Lorenzo Niccolai from Genesy (Italy), the solution is to rename the slf4j-api-1.7.5.jar file in the /opt/pentaho/data-integration/plugins/steps/CmisInput/ folder.
Executing the AAAR_Extract script again, the exception (and the conflict) will disappear. 😉

Conclusion

In this post I shared the solution to the exception described above, which occurs during the first execution of the AAAR_Extract script, immediately after the first installation. Many thanks to Fabio Benevento and Lorenzo Niccolai from Genesy (Italy), who tested and used the solution described in this post.

[Update] Sparkl error on Pentaho 5.4.0.1-130 with multiple modules

A few days ago I shared a post about a bug in the Sparkl App Builder for Pentaho 5.4 (more precisely on 5.4.0.1-130). You can read the post here. Today I would like to share an update after the support from the Webdetails team and the Meteorite.bi team. Thank you from all the community.

Description of the test environment

Starting from a vanilla installation of Ubuntu 14.04.02 LTS, Oracle Java 1.7.0.79 has been installed and used below. Pentaho BI Server 5.4.0.1-130 has been downloaded from SourceForge and installed following this tutorial.

About the test 1 – Installation of the three modules

Repeating test n.1 described here, the error is still present. After some tests, the Pentaho dev team suggested taking a look at the Saiku module and involved the Meteorite.bi team.

Description of the solution

The Meteorite.bi team solved the issue by releasing a bugfixed version of Saiku Analytics v3.3 (Stable). The only thing to pay attention to is to install the v3.3 (Stable) version instead of the default one (the v3.3-EE). Below is a picture describing how to get the correct version of the module, directly in the Pentaho Marketplace.

Saiku_marketplace

Installing the right version again, the error is solved and you can use all the modules in your Pentaho environment.

Conclusion

In this post I shared the solution to a conflict between the Saiku module, the Pivot4J Analytics module and A.A.A.R. in the Sparkl App Builder on Pentaho 5.4 (more precisely on 5.4.0.1-130). This solution is based on the tests described here and brought to my attention by the community. Thank you for that and please… don’t stop doing it. 😉
