Monday, 30 June 2014


Differences between ODBC and OCI

ODBC: Open Database Connectivity, also known as the Universal Data Connector.
OCI: Oracle Call Interface, which connects natively to Oracle sources.
Both can connect to Oracle sources.
ODBC: if we use ODBC to connect to an Oracle source, then the ODBC connection must be defined on the machine where the BI Server is installed.
-- so the machine that hosts the BI Server must have the ODBC connection configured.
OCI is a native connection; we can import data without having an ODBC connection on the machine.
Ex: importing through the server.

OCI is a much better choice than ODBC for the following reasons:

* OCI is optimized for queries. Transparent prefetch buffers reduce round-trips and improve performance and scalability. As a result, there is reduced memory usage on the server.
* OCI is optimized for round-trips. No-reply requests are batched until the next call is generated for the server. This allows certain calls to be lazily propagated.
* OCI is thread safe. You do not need to mutex (use mutual exclusivity locks) any of the OCI handles. ODBC is not thread safe, so you have to mutex most data structures.
* OCI provides an asynchronous event notification API for active databases.
* OCI provides enhanced array data manipulation language (DML) operations that reduce round-trips.
* OCI returns ROWIDs for all rows selected for update in a single round-trip. Using ROWID allows for more efficient SQL access.
* ODBC has no concept of sessions. OCI decouples connections, sessions and transactions. Multiple users can use a single connection; they get serialized on the connection for SQL operations. Multiple transactions can exist per user. This allows users to scale and service more users than there are connections available. Sessions and transactions can be migrated between connections to the same server.
* ODBC does not support object types, large objects (LOBs), and other new Oracle datatypes.
* ODBC affects server scalability. Using ODBC and having n number of concurrent users forces the server to have n number of processes service the clients if Oracle8i is operating in dedicated server mode. Your operating system may or may not support so many connections and processes.
* ODBC is a wrapper around OCI so it is slower.

Setting up the ODBC Connection for OBIEE

Follow the steps below to set up the ODBC Connection.

1. The tool is found here:

2. Load odbcad32 and you'll get this:

3. Select "Add…", select the Oracle BI Server driver, and click Finish.

OBIEE Development Life Cycle - Administration

This is a topic that everyone is already aware of. While going through the Oracle documentation for OBIEE Administration, I found this well-explained article. It illustrates the process involved in a development project and clearly depicts the roles of the various technical personnel involved.

Phase II - Branching, Fixing, and Patching


Phase III - Independent Semantic Model Development 
Find the Detailed Case Study for this example HERE

Scheduler in OBIEE11g

Hi, I tried out how to schedule a report in OBIEE 11g. I thought I would share the errors I got and the steps to implement the scheduler with everyone. Hope you will enjoy this post..


As in OBIEE 10g, we can create a saved request / analysis in OBIEE 11g and schedule the reports. But in OBIEE 11g all the tables related to the scheduler services are preconfigured. When we install OBIEE 11g, we run the Repository Creation Utility (RCU). RCU creates two schemas, the Metadata Services schema (MDS) and BIPLATFORM. All the scheduler-related tables are created in the BIPLATFORM schema, so Enterprise Manager (EM) picks up these schemas automatically and runs the scheduler services in BI 11g.
Connect with the BIPLATFORM schema as the data source. There we can view the scheduler-related tables S_NQ_JOB, S_NQ_INSTANCE, S_NQ_JOB_PARAM and S_NQ_ERR_MSG, which were already created when we ran RCU.
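If you want to confirm this from the database side, a quick check like the following will do it. This is a sketch: the table names are the ones listed above, and it assumes you are connected as the BIPLATFORM user.

```sql
-- Run while connected as the BIPLATFORM user.
-- Confirm the scheduler tables created by RCU exist:
select table_name
from   user_tables
where  table_name in ('S_NQ_JOB', 'S_NQ_INSTANCE', 'S_NQ_JOB_PARAM', 'S_NQ_ERR_MSG');

-- After an agent has run, each execution is logged here:
select * from S_NQ_INSTANCE;
```

If the first query returns no rows, RCU has not been run against this schema.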

Configure Email settings:

1. Log in to Fusion Middleware Control Enterprise Manager (http://yourservername:7001/em) using Admin user credentials.

2. Navigate to the Mail tab (Business Intelligence > coreapplication > Deployment > Mail).

3. Click Lock and Edit Configuration to enable changes to be made.

4. Complete the fields with the following information:

• SMTP Server – the SMTP server of your email provider (e.g.
• Port – the port of the SMTP server (e.g. 25)
• Display name of sender – any name (e.g. Oracle Business Intelligence)
• Email address of sender – the sender's email address (e.g.
• Username – same as the sender's email (e.g.
• Password – the password of your email account
• Confirm password – confirm the same password as above
• Number of retries upon failure – any number
• Maximum recipients
• Addressing method To, Blind Copy Recipient (Bcc) – if you want to receive a BCC, select it.

I created a GMX mail account and entered the username and password of that account here.
Note: If you search for a free SMTP service, this site is reported to offer a free SMTP server that works without the need to upgrade to a professional account, as for instance Google and Yahoo require.

smtp server :
port : 25

5. Click Apply, and then click Activate Changes.

6. Return to the Business Intelligence Overview page and click Restart.

Creating an Agent to send mails:

Since we are only going to test the email settings we configured above, we will keep this agent as simple as possible.

Log in to OBIEE (http://yourservername:7001/analytics)
1. Click New in the global header and then select Agent (under Actionable Intelligence).
2. In the General tab, set the Priority to Normal.


3. In the Schedule tab, select Once from the Frequency list box.

4. In the Delivery Content tab, select the content you want to deliver by clicking the Browse button.

5. In the Recipients tab, add the email addresses (you can add external users' addresses as well) of those you want to receive the delivery content.


6. In the Destinations tab, select the destinations, which include the user's dashboard and email.

7. Save the agent and run it. You will be able to run the agent only after saving it. Click the icon shown below to run the agent.

If it runs successfully, the recipient gets the mail with a PDF attachment (we selected the PDF format in the Delivery Content tab).

It also appears on the corresponding dashboard of the user.


Issues I faced while doing this:

The first issue: my Mozilla Firefox had updated itself to version 10, which doesn't support Oracle BI Presentation Services.
I downloaded and installed Google Chrome, and it's working fine now. I found Google Chrome to be a really good browser.

This error came up when I created an agent and was trying to save it. The issue is with the IE9 version.
It works well in Google Chrome; I am able to save the agents now.

I had given two recipients. The report alert was being delivered only to the user's dashboard; the report was not delivered by mail. The issue arose because I tried to send using Gmail, Yahoo Mail, etc.
Select GMX as the SMTP server, or use your company's SMTP server.
Okay guys, I am going to sleep... we will deal with other OBIEE features some other day...

OBIEE 11g log files path

In this post, we are going to see where the log files in OBIEE 11g reside.
In OBIEE 10g, we could check the different log files in the installation directory. In OBIEE 11g, we have the Enterprise Manager Console, where we can check the log files, but we can still see these files in the OBIEE install directory.

  • Enterprise Manager Console
Log on to the EM Console (http://localhost:7001/em). Once you have logged on, in the left pane navigate to Farm_bifoundation_domain –> Business Intelligence –> coreapplication –> Diagnostics –> Log Messages.
Here you can see the Most Recent Errors and Most Recent Warnings. Below the warning messages, you can also see all the log files, grouped by category, using the Log Viewer.
Available files:
• Presentation Services Log
• Server Log
• Scheduler Log
• JavaHost Log
• Cluster Controller Log
• Action Services Log
• Security Services Log
• Administrator Services Log

  • Manually open log files from your install directory
In this section, we will see all the available log files and their paths on disk. The log files for the OBIEE components are found under /instances/instance1/diagnostics/logs (e.g. C:\OBI\instances\instance2\diagnostics\logs).
These are the different types of log files:
Presentation Server log files
Files: sawlog0.log, webcatupgrade0.log
BI Server log files
Files: nqquery.log, nqserver.log, servername_NQSAdminTool.log, servername_nQUDMLExec.log, servername_obieerpdmigrateutil.log
Scheduler log
Files: nqscheduler.log
Cluster Controller log
Files: nqcluster.log
ODBC log
OPMN log
Files: debug.log, logquery~NUMBER.log, opmn.log, opmn.out, service.log
Installation logs

Use of Log Viewer: You can search for and view the log entries for Oracle Business Intelligence components using Fusion Middleware Control Log Viewer. The log files can be searched for log messages, and you can apply filters that can, for example, target a certain date range, user, user transaction, or level of message (error, warning, notification, and so on). You can also view log files in their entirety from the Fusion Middleware Control Log Viewer

OBIEE 11g Security : Row-Level Security


So we’ll start this week’s focus on OBIEE 11g security with the topic that most people associate with “security” around reports – row-level security. Row-level security is where groups of users have access to a set of reports, but they all see different sets of data within the reports due to filters being applied silently in the background. As a report author you can of course apply filters to an analysis, but in this case I think what people generally associate row-level security with is filters that are applied automatically, according to a set of rules, without the user being aware of it or even filters being visible when you add a filter view to the analysis’ compound layout.
In the two analysis screenshots below, for example, one user sees one set of stores when they view a particular report, whereas another user viewing the same report gets to see a completely different set of stores, but no filters have been explicitly added to the analysis – instead, some other process is applying row-level security to the analysis’ data in the background, automatically.
The advantage of row-level security, particularly if it's applied automatically by the BI Server in the background, is that you can roll out generic reports across all of your users, but each one will provide a customised view showing just the data, or "rows", that apply to that user, even if the report designer forgets to add filters to the analysis to restrict users' views of the data to the correct region, in this case.
So where can you set up row-level security in OBIEE, so that this background row-level filtering takes place? In general, you would set up row-level filtering like this using two repository features, together:
  1. First, you would define what's called a "session variable" to hold the region name that the user is allowed to see, and then
  2. You would define a filter, using the Identity Manager dialog, to filter rows against this region setting.
You can also apply row-level filtering at the database level, by using what’s called Virtual Private Databases, and then enabling this in the physical database settings, but that’s a topic for another day. In addition, of course the developers of individual analyses and reports can add filters to the report definition, which again can make reference to session or other variables. But for now, let’s look at the most common way to set up background row-level filtering and security.
Assuming that we’ve got two tables stored in a database somewhere, that list out users and the region they are assigned to, like this:
The steps to set up security to restrict users to a single region’s worth of data would look like this:
  1. Before we do anything, we’ll need to create a new, dedicated connection pool definition within the physical database settings that connects us to the database schema that holds these tables. This is because, as from OBIEE 11g, by default connection pools that are used for general database queries can’t be used for initialisation blocks, and you’ll get a validation warning if you try and do this. Instead, using the Oracle BI Administrator tool and with the repository open online, create a new connection pool within the database that holds these tables, enter the connection details, and give it a name such as init_blocks_cp.
  2. Now, we’ll start by defining the session variable that will hold the region name that the user is allowed to see. Still using the Oracle BI Administrator tool and with the repository open online, select Manage > Variables… to open the Variable Manager dialog.
  3. Within the navigation tree on the left-hand side of the Variable Manager dialog, click on the Session > Variables node, then right-click in the area on the right and select New Session Variable…
  4. With the Session Variable dialog open, type in REGION as the Name, and then press the New… button next to the Initialization Block: area, which is currently set to not assigned.
  5. The Session Variable Initialization Block dialog will then be displayed. Enter GET_REGION_SESS_VAR as the name, and then press the Edit Data Source button to bring up the Session Variable Initialization Block Data Source dialog.
  6. Using this dialog, select Database as the Data Source Type, select the connection pool that you defined earlier, and then select the Default initialisation string radio button. Then, type in the SQL that returns the region for a given user, using the :USER substitution variable, like this:
     select r.region
     from gcbc_hr.staff_logins l, gcbc_hr.staff_regions r
     where l.login_name = r.login_name
     and l.login_name = ':USER'
    Press OK to close the dialog, and the Session Variable Initialization Block dialog should look as in the screenshot below:
  7. Using the Oracle BI Administrator tool and with the repository open online, select Manage > Identity… from the application menu.
  8. The Identity Manager dialog will then be displayed. Click on the Application Roles tab on the right-hand side, and then double-click on the role you wish to assign the row-level security settings to; in this case, BIConsumer, to apply the rule to all users of the system.
  9. The Application Role dialog will then display. Press the Permissions… button to open the User/Application Role Permissions dialog.
  10. With the User/Application Role Permissions dialog open, press the Add button to bring up the Browse dialog. Using the Browse dialog, select the table you wish to apply the filter to. In this instance, there are two ways we can set up the filter. The Region column that corresponds to the filter is found in the Dim Stores table, which means that if we want to apply the filter when any column from this table is included in an analysis, we’d double-click on this particular table to select it for the filter condition. If, however, we wanted the filter to be applied even if no column from the Dim Stores table was selected, we’d double-click on the associated fact table Fact Sales instead, which would apply the filter regardless of what attribute columns were selected for the analysis criteria.
    Note also that you can place the filter on either a presentation table, or a business model table. If you choose the former, and you’ve got several subject areas based off of the same business model, only the table from that particular presentation layer subject area will be subject to the filter. If, however, you click on the Business Model tab instead and then select a business model table, every subject area that contains presentation tables derived from that business model table will have the filter applied.
  11. After double-clicking on a table, you're returned back to the User/Application Role Permissions dialog, to define the actual filter expression.
    Click in the Data Filter area and then press the Edit Expression button to bring up the Expression Builder – Data Filter dialog.
  12. Using this dialog, set the following filter, which references the session variable you defined earlier using the VALUEOF(NQ_SESSION.variablename) syntax: "Sales"."Dim Stores"."Region Name" = VALUEOF(NQ_SESSION.REGION)
    Press OK to close the dialog, then keep pressing OK with the other dialogs to eventually return back to the Identity Manager dialog. To close that final dialog, select Action > Close.
Now, when users log in that have entries in this table, you should see that queries that reference either the Dim Stores table (in this case), or if you’ve set it up like this, any query against the fact table, will have the required restriction applied to the rows of data returned. If you don’t see this restriction happening, check the nqserver.log file for an error message like:

[2012-03-08T00:16:24.000+00:00] [OracleBIServerComponent] [ERROR:1] 
[] [] [ecid: 3f3d2d8955322f32:1843bf05:135ee2cec8c:-8000-00000000000008c3] [tid: 10f4]  [nQSError: 17010] SQL statement preparation failed. [[[nQSError: 17001] Oracle Error code: 904, message: ORA-00904: "R"."LOGIN_NAME": invalid identifier at OCI call OCIStmtExecute.[nQSError: 13011] Query for Initialization Block 'GET_REGION_SESS_VAR' has failed.

which indicates that there’s an error in your SQL, and the session variable value couldn’t be set.
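A quick way to isolate this kind of failure is to run the initialisation block SQL directly in SQL*Plus or SQL Developer, substituting a real login for :USER. This is a sketch using the example schema above; 'sfranceso' is just the illustrative login from this post.

```sql
-- The init block query with :USER replaced by a literal login name, so the
-- SQL (and the column names it references) can be verified outside the RPD:
select r.region
from   gcbc_hr.staff_logins  l,
       gcbc_hr.staff_regions r
where  l.login_name = r.login_name
and    l.login_name = 'sfranceso';
```

An ORA-00904 here, like the one in the log above, means a column name in the query doesn't match the table definition.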
So what happens if you log in as the weblogic or biadmin user, for example, who doesn’t have an entry in this table, or indeed a regular user that’s not in the table? If you try it out, you’ll see that in these cases, administration users get all values returned, whereas regular users get an error message saying that the variable hasn’t been populated, which probably is the behaviour that you wanted.
The other obvious question is how you would go about assigning more than one group to a particular user. For example, if you wanted the sfranceso user to be able to see all of the San Francisco regions, not just North SF, you could put entries in the STAFF_REGIONS table like this:
But the SQL in the variable definition that we defined earlier is expecting a single value, not a set of values. The way that you in fact deal with this is to create what’s called a “row-wise initialization” variable that can contain multiple values – internally, it still holds a single value, but this consists of all the values you load in, separated by semi-colons.
To define a row-wise initialised variable, follow these steps:
  1. Using the Oracle BI Administration tool and with the repository open online, select Manage > Variables… as before. This time though, do not create the variable first; instead, click on the Session > Initialization Blocks entry in the Variable Manager navigation tree, then right-click on the right-hand panel and select New Initialization Block…
  2. With row-wise variables, you define the SQL query, and the variable, at the same time. Using the Session Variable Initialization Block dialog, type in a name for the initialisation block, for example GET_REGIONS_ROWWISE_SESS_VAR, and then press the Edit Data Source… button to bring up the Session Variable Initialization Block Data Source dialog. As before, select Database as the Data Source Type, select the Default initialization string radio button and select the connection pool as before, but then when you come to type in the SQL you define the variable at the same time, like this:
    select 'REGIONS', r.region
    from gcbc_hr.staff_logins l, gcbc_hr.staff_regions r
    where l.login_name = r.login_name
    and l.login_name = ':USER'
    with the literal value 'REGIONS', in this case, defining the row-wise variable name as part of the initialisation block definition.
    Press OK to close this dialog and return to the Session Variable Initialization Block dialog.
  3. In the previous example, you'd see the variable name listed in the Variable Target area. To assign this initialisation block to the row-wise variable we've just defined in the SQL statement, press the Edit Data Target… button, and then when the Session Variable Initialization Block Variable Target dialog is shown, check the Row-wise initialisation checkbox. Your final Session Variable Initialization Block dialog should now look like this:
  4. Now, to reference the row-wise session variable in a filter, for example in the mandatory filter we applied to the BIConsumer role, you'd reference it in the same way as with a regular variable: "Sales"."Dim Stores"."Region Name" = VALUEOF(NQ_SESSION."REGIONS")
    Then, you should see any users that have more than one region assigned, getting the correct values in their analyses:
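To make the row-wise behaviour concrete: with one STAFF_REGIONS row per region the user may see, the BI Server effectively expands the filter into an IN list. This is a sketch only; the table and column names in the query (sales_by_region, sale_amount) and the 'Central SF' region are made up for illustration, and the exact physical SQL the BI Server generates will differ.

```sql
-- STAFF_REGIONS holds one row per region the user may see, e.g.:
--   LOGIN_NAME   REGION
--   sfranceso    Central SF
--   sfranceso    North SF
-- With REGIONS initialised row-wise, the data filter
--   "Sales"."Dim Stores"."Region Name" = VALUEOF(NQ_SESSION."REGIONS")
-- behaves like:
select   region_name, sum(sale_amount)
from     sales_by_region
where    region_name in ('Central SF', 'North SF')
group by region_name;
```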
So that's the basics of how row-level security works in OBIEE – you define a session variable using an initialisation block, which then gets used to create a data filter "behind the scenes" using the Identity Manager. There are a few variants on this – you might use Virtual Private Database, for example, to enforce the row-level security at the database level – but this is the most common approach.
One other row-level filtering approach that you might sometimes come across, though, is where the filter is applied at the logical table source level. If you open up the repository and double-click on a logical table source within the Business Model and Mapping layer, then click on the Content tab, you'll see that one of the sections lets you type in a SQL WHERE clause to limit the rows returned, like this:
In most cases, this section is empty for logical table sources, but you could use it to enforce row-level security for all users, without reference to a particular user or role, like this:
I personally don't tend to use this approach to row-level security, as it's applied across all users regardless of roles, and it's also not so obvious to find, hidden away in a logical table source definition. But you ought to be aware of how it's done, as it's used extensively, for example, by Oracle in the BI Applications repository model.
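As an illustration, the kind of fragment you might type into that Content tab looks like the line below. This is a sketch reusing the REGION session variable from earlier; a literal value (e.g. a fixed region name) would work the same way.

```sql
-- Typed into the logical table source's Content tab; unlike the Identity
-- Manager data filter, it is applied for ALL users, regardless of role:
"Sales"."Dim Stores"."Region Name" = VALUEOF(NQ_SESSION.REGION)
```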


Parent Child Hierarchy in OBIEE 11G

Step-By-Step Example to implement Parent-Child Hierarchy :

There are different types of hierarchies in an organization's data, each of which gives more insight for our analysis.

Ragged Hierarchy :

A hierarchy in which all the lowest-level members do not have the same depth. For example, a Time hierarchy might have data for the current month at the day level, the previous month’s data at the month level, and the previous 5 years’ data at the quarter level. This type of hierarchy is also known as an unbalanced hierarchy.

Skip-level Hierarchy :

A hierarchy in which certain members do not have values for certain higher levels. For example, in the United States, the city of Washington in the District of Columbia does not belong to a state. The expectation is that users can still navigate from the country level (United States) to Washington and below without the need for a state.

Parent-Child Hierarchy :

Consists of values that define the hierarchy in a parent-child relationship and does not contain named levels. For example, an Employee hierarchy might have no levels, but instead have names of employees who are managed by other employees.

A parent-child hierarchy is a hierarchy of members that all have the same type. This contrasts with level-based hierarchies, where members of the same type occur only at a single level of the hierarchy. The most common real-life occurrence of a parent-child hierarchy is an organizational reporting hierarchy chart, where the following all apply:

• Each individual in the organization is an employee.

• Each employee, apart from the top-level managers, reports to a single manager.

• The reporting hierarchy has many levels.

In relational tables, the relationships between different members in a parent-child hierarchy are implicitly defined by the identifier key values in the associated base table. However, for each Oracle BI Server parent-child hierarchy defined on a relational table, you must also explicitly define the inter-member relationships in a separate parent-child relationship table.
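To see what that separate relationship table has to contain, here is a minimal sketch on Oracle. The three-row EMP inline view is made up for illustration; the column names (member_key, ancestor_key, distance) follow the script later in this post. A CONNECT BY query walking up from each member produces the member/ancestor/distance triples:

```sql
-- A three-person chain: 1 is the top manager, 2 reports to 1, 3 reports to 2.
with emp (emp_no, manager_id) as (
  select 1, null from dual union all
  select 2, 1    from dual union all
  select 3, 2    from dual
)
-- Walk upwards from every employee to list all of their ancestors:
select connect_by_root emp_no as member_key,
       emp_no                 as ancestor_key,
       level - 1              as distance
from   emp
connect by prior manager_id = emp_no;
-- member 3 yields (3,3,0), (3,2,1), (3,1,2): one row per ancestor, plus self.
```

This "flattened" set of inter-member relationships is exactly what the generated parent-child relationship table stores, so the BI Server never has to run recursive queries at report time.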

Let us start with the parent-child hierarchy and see how to implement it.

I am taking the Employee table to work with. Click here to download the .sql file containing the Employee table DDL and data.

Step1: Create a blank repository and import the Employee table into the Physical Layer.

Step2: Create two aliases on the EMPLOYEE table, 1. 'EmployeeDim' and 2. 'SalaryFact', and create a physical join from EmployeeDim to SalaryFact.

Step3: Drag the tables into the BMM layer and set aggregations for the fact columns.

Step4: Now right-click on the Employees logical table and choose the option to create a new parent-child hierarchy.

Step5: Choose the member key (by default it takes the primary key, here Employee Number) and the parent column, as shown in the screenshot below.

Step6: Click on 'Parent-Child Settings'. This is where we generate the DDL and DML scripts used to create and populate the hierarchy table that the BI Server needs to report parent-child hierarchies. Here, click 'Create Parent-Child Relationship Table'.

Step7: Enter the DDL and DML script names and click Next.

Step7: Give a name for the parent-child hierarchy table and click Next.

Step8: You can see both the DDL script and the script to populate the data here.

Click Finish .

You can see the screen as follows .

Click OK, and then OK again.

After finishing the wizard, you can see that the hierarchy table was imported automatically.

Right-click on the EMPLOYEE_HIERARCHY table –> click Update Row Count, and observe that you get a 'Table does not exist' error.

Step9: Go to the path <beahome>\instances\instance1\bifoundation\OracleBIServerComponent\coreapplication_obis1\repository. Run the scripts 'EMPLOYEE_PARENT_CHILD_DDL.sql' and 'EMPLOYEE_PARENT_CHILD_DATA.sql'. (In my case, I used SQL Developer to run these scripts.)



Script to populate the data:


declare
v_max_depth integer;
v_stmt varchar2(32000);
i integer;
begin
select max(level) into v_max_depth
from EMPLOYEE
connect by prior EMP_NO = MANAGER_ID
start with MANAGER_ID is null;

v_stmt := 'insert into EMPLOYEE_HIERARCHY (MEMBER_KEY, ANCESTOR_KEY, DISTANCE, IS_LEAF)' || chr(10)
|| 'select EMP_NO as member_key, null, null, 0 from EMPLOYEE where MANAGER_ID is null' || chr(10)
|| 'union all' || chr(10)
|| 'select' || chr(10)
|| ' member_key,' || chr(10)
|| ' replace(replace(ancestor_key, ''\p'', ''|''), ''\\'', ''\'') as ancestor_key,' || chr(10)
|| ' case when depth is null then 0' || chr(10)
|| ' else max(depth) over (partition by member_key) - depth + 1' || chr(10)
|| ' end as distance,' || chr(10)
|| ' is_leaf' || chr(10)
|| 'from' || chr(10)
|| '(' || chr(10)
|| ' select' || chr(10)
|| ' member_key,' || chr(10)
|| ' depth,' || chr(10)
|| ' case' || chr(10)
|| ' when depth is null then '''' || member_key' || chr(10)
|| ' when instr(hier_path, ''|'', 1, depth + 1) = 0 then null' || chr(10)
|| ' else substr(hier_path, instr(hier_path, ''|'', 1, depth) + 1, instr(hier_path, ''|'', 1, depth + 1) - instr(hier_path, ''|'', 1, depth) - 1)' || chr(10)
|| ' end ancestor_key,' || chr(10)
|| ' is_leaf' || chr(10)
|| ' from' || chr(10)
|| ' (' || chr(10)
|| ' select EMP_NO as member_key, MANAGER_ID as ancestor_key, sys_connect_by_path(replace(replace(EMP_NO, ''\'', ''\\''), ''|'', ''\p''), ''|'') as hier_path,' || chr(10)
|| ' case when EMP_NO in (select MANAGER_ID from EMPLOYEE) then 0 else 1 end as IS_LEAF' || chr(10)
|| ' from EMPLOYEE' || chr(10)
|| ' connect by prior EMP_NO = MANAGER_ID' || chr(10)
|| ' start with MANAGER_ID is null' || chr(10)
|| ' ),' || chr(10)
|| ' (' || chr(10)
|| ' select null as depth from dual' || chr(10);

for i in 1..v_max_depth - 1 loop
v_stmt := v_stmt || ' union all select ' || i || ' from dual' || chr(10);
end loop;

v_stmt := v_stmt || ' )' || chr(10)
|| ')' || chr(10)
|| 'where ancestor_key is not null' || chr(10);

execute immediate v_stmt;
end;
/



Click on COMMIT to commit changes .

Right-click on the EMPLOYEE_HIERARCHY table –> click Update Row Count, and observe that you no longer get the 'Table does not exist' error.
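Once both scripts have run and the changes are committed, a couple of sanity queries against the new table (names as generated above) confirm the hierarchy rows are in place:

```sql
-- There should be one row per (member, ancestor) relationship:
select count(*) from EMPLOYEE_HIERARCHY;

-- Multi-level relationships (grand-manager and above) prove the hierarchy
-- was flattened, not just the direct parent rows:
select * from EMPLOYEE_HIERARCHY where DISTANCE > 1;
```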

Step10: Pull the 'ParentChildHierarchy' into the Presentation Layer, check consistency, and save the repository. Now we are ready to build reports.

Click Here to download Repository (Repository Password : Admin123)

Step11: Go to Answers and create an analysis including the Employee Hierarchy. That's it...


OBIEE 10g to 11g Upgrade

Presently I am working on an upgrade project. I searched a lot on the internet for articles related to the upgrade, but I couldn't find anyone who could tell me correctly where to find the related documents. So I am using this post to tell you where exactly you can find the documents related to the upgrade. I will be happy if this is of some use.

1. Oracle Upgrade Document:

This document gives a fair idea of how to upgrade a 10g RPD and web catalog to 11g.

2. See below the frequently seen differences in 11g reports compared to their 10g versions after upgrade.
These differences have to be fixed manually after the upgrade. I will be updating the steps for fixing these bugs in my next post. I will also cover the steps for fixing data mismatches in the reports and the bugs that Oracle Support has classified as "Cannot be fixed".

Additional Criteria columns added
After upgrade, additional criteria columns might be added if there are chart interactions defined in 10g charts. You might see a column that is repeated multiple times. Chart interactions are migrated to the Interaction tab of the Column Properties dialog for columns within the Criteria tab. To avoid affecting other views, the upgrade process adds new columns to the criteria for each chart that includes chart-level interactions.
Additional popup menu for analyses
In 11g, you might notice a popup menu for analyses that supports multiple links for navigation. You can turn off the menu in the Interaction tab of the Column Properties dialog if you have only a single link.
Aggregation rules for running aggregates must be set in Answers reports after upgrade
If a column formula includes running aggregates (such as, MAVG(), MSUM(), RSUM(), RCOUNT(), RMAX(), RMIN()) and views include sub-totals, grand totals, or other columns in the Excluded columns section, then the data is displayed differently in the results in 11g. This difference occurs because the aggregation rule is not specified for this column. Ensure that you determine the correct aggregation rules for the columns. If you want to see the same values as in 10g, then include the following aggregation rules in the Edit Column Formula dialog:
  • MAVG(), MSUM() -> Aggregation Rule: Average
  • RSUM(), RCOUNT() -> Aggregation Rule: Sum
  • RMAX() -> Aggregation Rule: Max
  • RMIN() -> Aggregation Rule: Min
Report-based Aggregate
When using report-based aggregates, you might obtain different results in 10g and 11g. You might see the different results in 11g when the following are true:
  • When a column has an aggregation rule of "Server Complex Aggregate".
  • When a pivot table includes sub-totals, grand totals, or other columns in the Excluded columns section.
  • When a column has Report-based Total (when applicable) checked.
The difference occurs because the data for this column for sub-totals was produced, for example, by the aggregate() function in 10g. In 11g, this issue is resolved by using the report_aggregate() function to accurately reflect the Report-based Total (when applicable) setting.
Attribute column in measure section might be repeated in a pivot table
In 10g, if you have an attribute column on the row edge and in the measure section, the column is displayed blank. In 11g, the column shows the exact value of the attribute; therefore you might see repeated values.
Calculated items added to all the views
Calculated items are generic in 11g and added to all views. In 10g, only pivot tables have calculated items. If you upgrade 10g analyses that include calculated items, then after upgrade, the calculated items are added to all the views.
Conditional formatting enhancement
In 11g, conditional formatting that is added in criteria applies to both tables and pivot tables. In 10g, conditional formats based on another column did not apply to pivot tables.
Data formatting might change in 11g
In 11g, data formatting in some analyses might be different than the data formatting in 10g. For example, if an analysis for 10g has two decimals, then you might not see those two decimals when the analysis is upgraded to 11g.
In 11g, the system attempts to honor analysis-level or view-level data formats. However, in cases where no data format has been specified, the system relies on the default behavior of the graphing engine. This reliance might create differences in formatting between analyses in 10g and 11g.
Default number of pivot table rows has changed
In 11g, pivot tables have a default of 25 display rows. You can change this number using the DefaultRowsDisplayed setting in the instanceconfig.xml file. See Chapter 18 of Oracle Fusion Middleware System Administrator's Guide for Oracle Business Intelligence Enterprise Edition for information.
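A sketch of the relevant instanceconfig.xml fragment, assuming the element nesting documented for 11g (verify the placement under ServerInstance against your release before editing the file):

```
<ServerInstance>
  <Views>
    <Pivot>
      <!-- number of rows shown per pivot table page -->
      <DefaultRowsDisplayed>75</DefaultRowsDisplayed>
    </Pivot>
  </Views>
</ServerInstance>
```

Restart Presentation Services after saving the change.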
Font weight and alignment issues
In 11g, there are changes in font weight and alignment that result from changes in default style sheets and skins.
Grand total is not highlighted with color
The default formatting for grand totals is different in 11g, and this difference might cause missing color highlighting for the grand total row.
Gray cell borders not kept
In 11g, there is a change of borders from bottom/right to top/left for hierarchical columns. You can reset the borders manually using analysis definitions.
Hidden but included data is not displayed
In 10g, if a column is hidden but included in a pivot table, the data is displayed in the pivot table. In 11g, if the column is hidden at the criteria level, then the data is not displayed.
Hidden columns used for labels are not displayed
Hidden columns used for labels in 10g are not displayed in 11g. If you have a column that is used as the label for a graph, but the column is hidden from the graph, then in 11g, the labels are not displayed.
Incorrect formatting while using reserved keywords
In 11g, reserved keywords (for example, CASE, WHEN, SELECT, PERCENT) must be double-quoted. Otherwise, you might see incorrect formatting.
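For example, assuming a hypothetical presentation column literally named PERCENT, an 11g column formula must quote the identifier:

```
CASE WHEN "Sales Facts"."PERCENT" > 10 THEN 'High' ELSE 'Low' END
```

Without the double quotes, PERCENT may be parsed as the reserved keyword rather than the column name.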
Missing view in 11g
In 11g, the query does not run if there are no data views in the analysis. In 10g, the query runs and displays a No Results view if there were no results. This difference might cause a missing view in 11g.
Navigations, drills, or action links might result in additional filters
In 11g, extra filters might be displayed when navigating, drilling, or clicking action links. The value of the item clicked and all the values to its left are passed, including the values for any columns that are set to repeat, which can result in additional filters being applied. In 10g, only the value of the item clicked and the values of the items to its left are passed.
No Results message displayed
In 10g, if there is a column selector and the first column in the column selector does not return any results, then the column selector view is displayed, which allows you to select other columns. In 11g, a No Results message is displayed and you cannot see the other columns. See the "Analysis Properties Dialog" topic in Oracle Fusion Middleware User's Guide for Oracle Business Intelligence Enterprise Edition for information on setting the No Results message.
Oracle BI Server returns all month and day names as three-letter abbreviations
The BI Server returns all month and day names as three-letter abbreviations. To use full names, set the relevant display parameters in the NQSConfig.ini file to YES.
After this change, any analysis that uses a CASE statement with month or day names must match the statements to either the long names or the short names, according to the settings in the NQSConfig.ini file.
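As a sketch (table and column names are hypothetical), a CASE statement must match the configured name length:

```
-- with three-letter abbreviations (the default):
CASE "Time"."Month Name" WHEN 'Jan' THEN 'Start of year' ELSE 'Other' END
-- with full names enabled in NQSConfig.ini:
CASE "Time"."Month Name" WHEN 'January' THEN 'Start of year' ELSE 'Other' END
```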
Possible data difference with time-series data
In 11g, time-series functions, such as ToDate and Ago, do not automatically add hidden key columns to the analysis as in 10g. This might cause possible data differences in 11g.
Report_count does not support the DISTINCT keyword
In 11g, Report_count does not support the DISTINCT keyword. Therefore, analyses must use COUNT(DISTINCT) instead of COUNTDISTINCT. This requirement affects the data when you select to use TopN or BottomN in the filter for a measure.
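For example (the column name here is hypothetical), an upgraded analysis would change:

```
COUNTDISTINCT("Sales Facts"."Order Number")
```

to:

```
COUNT(DISTINCT "Sales Facts"."Order Number")
```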
Integer division returns integer in 11g
Integer division that returns double point data in 10g returns an integer in 11g. You might notice these return values for a column in the following cases:
  • The results of the analysis lose the decimal points.
  • Data order is slightly different for the column. Because data is sorted without decimal points, some rows might contain the same values.
  • Formatting of the column is changed because the default formatting for integer (if configured) is used. (In 10g, the default format is used for double data types.)
To force integer division to return double point results, cast the numerator to a double point data type before the division. For example:
"int x" / "int y" = "int z"
Cast("int x" as float) / "int y" = "float z"
Cast("int x" as double) / "int y" = "double z"
If this behavior differs from what you saw in 10g, it is because a known issue caused integer division to incorrectly produce a double point result when certain physical data sources were used.
Integer data types now double data types
You can override the default data format for columns that were integer data types in 10g, and are now double data types in 11g. Without this workaround, the data for this column displays as a decimal number with two digits to the right of the decimal point.
You might notice this change for a column in the following cases:
  • The results of the analysis show decimal points where integers were shown in 10g.
  • Formatting of this column is changed because the default formatting for double (if configured) is used in 11g.
You can use the following options to maintain the same results as in 10g:
  • Use the Cast function to cast the values to the appropriate data type in the metadata repository.
  • Change the formatting for the column to display only integers and save that specification as the default format.
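For the first option, a minimal sketch (with a hypothetical column name) of casting the value back to an integer so that 10g-style whole numbers are displayed:

```
CAST("Sales Facts"."Units Sold" AS INTEGER)
```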
Return data from certain columns might be different
In some situations, the return data type of certain columns in 11g can be different from 10g. In 10g, some data sources, such as SQL Server and IBM DB2, return an integer for division formulas such as AVG, while other data sources return a double. In 11g, the results of all division formulas are promoted to a double type regardless of the data source, for consistency and for a more correct and precise value.
This difference might impact analysis results because some analyses from previous releases might not be formatted properly for decimal points. If this occurs, then override the default data format for columns that were integer data types in the previous release, but that are now double data types. If you do not perform this step, then the data for affected columns is displayed as a decimal number with two digits to the right of the decimal point.
See the description for the "Integer data types now double data types" change for details on how you might notice this change and on how you can obtain the same results as in 10g.
Rows might be missing in pivot tables
In 10g, rows without any measure data are displayed. In 11g, these blank rows are suppressed. As a result, you might notice missing rows.
Column sort is discarded
If there are two columns sorted first by one column and then by the other, and you choose to exclude the first column from the view, then in 10g the first sort is still respected. In 11g, the first sort is discarded and the data is sorted only by the second column included in the view.
Sort order might be different
Some 11g reports might have a different sort order than those same reports in 10g. For example, the default sorting for pivot tables in 11g is from left to right. In 10g, pivot tables have no default sort.
Measure-based sorts are not implemented completely for pivot tables and graphs in 11g. Graphs do not provide user-interface elements to emulate measure-based sorts. This is a restriction in 11g, and you can use workarounds to resolve sorting issues.
Axis label ranges changed
The ranges for the numeric axis labels in graphs have changed from 10g to 11g due to a different automatic axis range calculation engine.
Data differences in bar graphs on the Services dashboard
In 11g, the data format has been enhanced to show the differences between double and integer data types. You can resolve this issue manually by overriding the default data format for columns that were integer data types in 10g and are now double data types in 11g.
Different axis value in 11g
In 10g, graphs do not always honor criteria-level formats or other global data formats for columns. Data labels and numeric axis labels do not consistently follow this formatting. This issue has been addressed in 11g.
Drill-down on a graph might show different results
During upgrade, any existing 10g interactions are placed on the measures and are no longer available in an axis or a legend. To invoke the action, you must click the measures in the graph rather than the axis labels or legend. You can add action links directly to a column placed on an axis or legend by adding an action link to the column within the criteria, after which the added action links are displayed on an axis label or legend.
Graphing engine is not responding
In 11g, the default size of the graph data that is sent by the JavaHost to Presentation Services is 4 MB. If you have a graph with a large amount of data, then you might see a message that the graphing engine is not responding. To work around this issue, increase the graph data size in the instanceconfig.xml file (for example, to 6 MB). See Oracle Fusion Middleware System Administrator's Guide for Oracle Business Intelligence Enterprise Edition for information on editing the configuration file.
Graphs might have missing labels
Some axis labels (on both numeric and category axes) might be skipped as a result of the automatic label layout algorithm in use for 11g.
Graph labels for Y-Axis cannot be rotated
You can rotate y-axis graph labels only to 0, 90, or -90 degrees; 45-degree rotations are not supported.
Behavior of grid lines in area graphs
In 11g, grid lines are drawn on top of the area markers in an area graph; that is, the grid lines are visible on top of the plotted area. In 10g, the grid lines are not drawn over the area markers.
Line graphs are stacked
In stacked line-bar graphs in which two or more measures are shown as lines on the same axis, the lines are not stacked in 10g but are stacked in 11g.
Some measures rendered as lines are now bars
In 10g, some measures are randomly selected to be displayed as lines instead of bars for line-bar graphs. In 11g, the plotting of the measures depends upon the graph definition in the analysis and is respected. If the measures are defined to be displayed as bars, then they are displayed as bars.
Missing unknown column in a graph
10g adds an unknown column to a graph whenever the graph definition is not completed by the columns currently in the layout. This is fixed in 11g, so you might see a missing column in the graph: no additional columns are included in the layout, and you see a message box instead. During the upgrade from 10g to 11g, all such unknown columns are removed from the analysis because they are considered invalid; that is, they are not present in the query for the analysis or in the XML for the criteria.
Multiple pie graphs in 11g for single pie graph in 10g
Oracle BI EE 10g does not support multiple pie graphs; however, 11g supports pie graphs for all columns. This enhancement might result in multiple pie graphs after upgrade.
Navigations in graphs have changed
11g graph navigations have changed. If you have navigations on the axis labels or legends in 10g, then they are now moved to the criteria level and are therefore not available.
Negative pie graph values not rendered
In 10g, pie graphs display absolute values: negative values are interpreted as positive values and those slices are displayed. In 11g, slices are not displayed for negative values, although the legend is still displayed for them. When all the values are negative, the graph is not displayed.
Pie graph has legend with a "mini" pie graph
When you select to use a graph in a legend that reduces the size of the graph to be too small, 10g does not show the entire graph. However, in 11g, the engine attempts to display graphs in the smallest of spaces. The layout algorithm tries to allocate the maximum area possible to the graph. For legends with too many items, scroll bars are included to avoid compromising the area allocated to the graph.
Right-side scale might be missing from graphs
In 11g, the graphing engine maps the Y2 axis in a line-bar graph to a line. Therefore, even though axes are not synced, the Y2 axis cannot be shown because there is no data for a line.
Possible duplicate navigations
In 11g, action links are generic and upgraded to criteria action links for measures. As a result, there might be duplicate navigations. In addition, a view might be pointing to a non-existent analysis, which results in a "Path Not Found" error.
For example, suppose that you have a 10g report with two hidden graphs, each with an action link on it. When the report is upgraded, all graph action links are upgraded to criteria action links for measures, which results in additional action links in other views. In this example, the action links in the original graphs point to non-existent analyses. To work around this situation, manually remove such action links.
Possible mismatch between legends and graphs in 11g
When a stacked bar graph is upgraded from 10g to 11g, the order or position of the series might change. However, the legend view is upgraded without any change. This might cause a mismatch between the legend that is displayed in the legend view and the color that is displayed in the graph. To resolve this, either change the color in the graph or update the legend to match the color in the graph.
In addition, the stacking order in the bar graph changes when you include a column in Vary Color By. For other cases, the order and coloring is maintained. The legend is incorrect or mismatched when you specify conditional formatting on the column in Vary Color By.