Wednesday, January 23, 2013

TM1 Dimension properties

You might be interested in the answers to the following questions.

1) How to find how many elements a dimension has in TM1?
2) How much memory is occupied by each dimension in TM1?
3) How many subsets are present in each dimension in TM1?

The answers to the above questions can be read from the dimension properties view, shown in the image below.



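If you want the same numbers programmatically rather than by opening the properties dialog, a short TI Prolog along the following lines can help. This is only a sketch: 'Account' is a hypothetical dimension name, the }StatsByDimension control cube is populated only while performance monitoring is running, and its measure names can vary by TM1 version, so verify them on your server.

# TI Prolog sketch: dimension statistics (hypothetical names throughout).
sDim = 'Account';

# 1) Element count - DIMSIZ returns the number of elements in a dimension.
nElements = DIMSIZ( sDim );

# 2) Memory used - read from the }StatsByDimension control cube
#    (requires performance monitoring; verify the exact measure name).
nMemory = CellGetN( '}StatsByDimension', sDim, 'Memory Used' );

# 3) Subset count - public subsets are stored as .sub files in the
#    <data directory>\<dimension>}subs folder, so counting them with
#    WildcardFileSearch is one common approach.

ASCIIOutput( 'dimstats.csv', sDim,
    NumberToString( nElements ), NumberToString( nMemory ) );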
Thx-
Shyam Gohil

Friday, November 2, 2012

Cognos TM1 Turbo Integrator Interview Questions

Hi all,

Please find the interview questions for Cognos TM1 Turbo Integrator.

1)      How do you run a Turbo Integrator process from the command line
a.       TM1RunTI is a command-line interface tool that is used to initiate TI processes (see the sketch after this list).
2)      Which function can be used to serialize TI processes
a.       The Synchronized() function serializes TI processes so they are executed sequentially (see the sketch after this list).
3)      What are the data sources available with TI
a.       Comma-delimited text files, including ASCII files
b.      Relational databases using ODBC Connectivity
c.       Other Cubes and Views
d.      Microsoft Analysis Services
e.      SAP via RFC
f.        IBM Cognos Packages
4)      What is the string length limit in Turbo Integrator
a.       8000 single-byte characters. Strings longer than 8000 bytes are truncated.
5)      What options are available after importing data using TI
a.       Create Cube and populate data
b.      Re-create cube: destroys the existing cube definition and overwrites it
c.       Create and Update dimensions
6)      What are the sub-tabs in the Advanced tab of TI
a.       Prolog
b.      Metadata
c.       Data
d.      Epilog
7)      What is Prolog tab
a.       The Prolog procedure is executed before the data source for the TI process is opened. If the process has no data source, TI skips the Metadata and Data tabs and goes directly to the Epilog.
8)      What is Metadata
a.       A series of statements that update or create cubes, dimensions and other metadata structures during processing
9)      What is Data Tab
a.       A series of statements that manipulate values for each record in the data source
10)   What is Epilog Tab
a.       A series of statements to be executed after the data source is processed
11)   What is TM1 Package Connector
a.       It helps to import data from packages/dimensions and Custom Queries.
12)   What is Bulk Load mode
a.       Bulk Load mode enables TM1 to run in a special optimized single-user mode. This mode can maximize performance for dedicated tasks during times when there is little or no load on the server, such as overnight.
b.      Bulk Load mode does not display a message to alert end users, and no new connections can be created while it is active.
13)   What happens when Bulk Load mode starts
a.       All processing by other threads is paused
b.      Any existing user threads and running chores will be suspended
c.       All scheduled chores will be deactivated
d.      All system-specific threads and TM1 Top connections will be suspended
14)   Ending Bulk Load mode
a.       All system and User threads will be resumed and user logins will be allowed
15)   Which functions are called for enabling and disabling the bulk load mode
a.       EnableBulkLoadMode() to enable and DisableBulkLoadMode() to disable.
16)   How to enable Bulk Load Mode in TI
a.       Bulk Load mode can be enabled in the Prolog or Epilog section of TI.
b.      It is recommended to enable Bulk Load mode in the Prolog section (see the sketch after this list).
17)   How does Synchronized() help TI processing
a.       By default, TI processes can execute in parallel. In some applications, TI processes should be executed serially to improve processing efficiency. The Synchronized() function forces TI to run such processes in sequence.
18)   What are the shortfalls of Chore Start Time
a.       TM1 schedules chores in GMT. TM1 has no automatic mechanism to accommodate daylight saving time, so chore schedule times must be edited when daylight saving time starts and ends.
19)   What is Chore Commit property
a.       It allows you to specify whether the processes in a chore are committed as a single transaction or as multiple transactions
b.      Single Commit Mode: all processes are committed as a single transaction. This is the default.
c.       Multiple Commit Mode: any processes that need to be committed do so as they are processed.
d.      The chore commit property can only be changed when the chore is INACTIVE
20)   What are the different procedures within TI
a.       Defining Data Source
b.      Setting Variables
c.       Mapping Data
d.      Editing Advanced Scripting
e.      Scheduling the completed Process
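To make some of the answers above concrete (questions 1, 2, and 12-16), here is a minimal sketch. All names (MyProcess, Planning, FinanceLock, SalesCube) are hypothetical, and the TM1RunTI flags shown are the commonly documented ones, so check the reference for your TM1 version.

# --- Question 1: invoking a process from the command line ---
# (run from the TM1 bin directory; all parameter values are hypothetical)
#   TM1RunTI -process MyProcess -adminhost localhost -server Planning -user admin -pwd apple

# --- Prolog tab of MyProcess ---

# Question 2: processes that call Synchronized() with the same lock
# name are executed one at a time instead of in parallel.
Synchronized( 'FinanceLock' );

# Questions 12-16: switch the server into Bulk Load mode; the Prolog
# is the recommended place to do this.
EnableBulkLoadMode();

# --- Data tab ---
# Runs once per source record, e.g.:
#   CellPutN( vValue, 'SalesCube', vRegion, vMonth );

# --- Epilog tab ---

# End Bulk Load mode so user logins, chores, and other threads resume.
DisableBulkLoadMode();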

 Thx,
C1PH3R

Thursday, November 1, 2012

Cognos TM1 Rules interview questions

Cognos TM1: RULES interview questions.

NOTE: ANSWERS ARE WRITTEN IN POINTS.

1)      What is a sparse cube
a.       A cube in which the number of populated cells, as a percentage of total cells, is low
2)      What is Sparse Consolidation Algorithm
a.       This algorithm skips over cells that contain Zero or are empty. This algorithm speeds up consolidation calculation in cubes that are highly sparse.
b.      When consolidating data in cubes that have rules defined, TM1 turns off the sparse consolidation algorithm because one or more empty cells may be calculated by rules.
c.       Skipping rules-calculated cells will cause consolidated totals to be incorrect.
d.      When sparse consolidation algorithm is turned off, every cell is checked for value during consolidation.
3)      What is the logic of Sparsity in cubes
a.       On average, the more dimensions a cube has, the greater the degree of sparsity.
4)      What is Over Feeding
a.       Defining feeders for consolidated cells. (Feeding a consolidated cell automatically feeds all children of the consolidation.)
5)      What is Under Feeding
a.       Failing to feed cells that contain rules-derived values. This always results in incorrect values and must be avoided at all costs.
6)      How does SKIPCHECK help TM1
a.       SKIPCHECK forces TM1 to use the sparse consolidation algorithm in all cases (see the rules sketch after this list).
7)      What are FEEDERS
a.       FEEDERS create placeholders on cells so that those cells are not skipped during consolidation
8)      What is FEEDSTRINGS
a.       If a rule defines string values for any cells, then FEEDSTRINGS must be inserted as the first line of the rules file.
b.      FEEDSTRINGS declaration ensures that cells containing rules-derived strings are fed.
c.       Every calculation statement in a rule should have a corresponding feeder statement.
9)      What are simple FEEDERS
a.       FEEDERS that are applied to a calculation within one dimension of one cube.
10)   How is the FEEDER Statement written when feeding one cube from another
a.       The calculation statement resides in the target cube, but the FEEDER statement should reside in the source cube (see the rules sketch after this list).
b.       The feeder is basically the inverse of the calculation statement in the target cube that requires the feeder.
11)   How to troubleshoot FEEDERS
a.       Use Rules Tracer to assist in the development and debugging of rules.
b.      Rules Tracer functionality is available in Cube Viewer
12)      How does Rules Tracer help
a.       It traces FEEDERS, ensuring that selected leaf cells feed rules-calculated cells properly
b.   It checks FEEDERS, ensuring that the children of a selected consolidated cell are fed properly. The Check Feeders option is available from consolidated cells only, not from leaf cells
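As a concrete illustration of questions 6-10, here is a minimal rules-file sketch for a hypothetical cube Sales(Product, Month, SalesMeasure); all cube and element names are made up.

# FEEDSTRINGS;   <- would go on the first line if any rule returned strings (question 8)

SKIPCHECK;       # question 6: force the sparse consolidation algorithm

# The rule: Amount is calculated, so unfed cells would otherwise be
# skipped during consolidation.
['Amount'] = N: ['Units'] * ['Price'];

FEEDERS;

# Simple feeder (questions 7 and 9): wherever Units holds a value,
# flag Amount so it is not skipped.
['Units'] => ['Amount'];

# Cross-cube case (question 10): if a rule in another cube reads from
# this one, the feeder lives here in the source cube and points at the
# target cube, e.g.:
#   ['Units'] => DB('TargetCube', !Product, !Month, 'Amount');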

Thx,
Shyam Gohil

Friday, October 19, 2012

Report Studio and Google maps - Part1

Problem Statement: The user has a list report with 3 columns (Country Name, Planned Revenue, Revenue). When the user clicks on a country name, a marker should pop up in a Google map.

Steps:
1) Use the GO Sales or GO Data Warehouse package and create the list report.
2) In the snapshot below, I have created a table with 2 columns.

3) As you can see, I have used 5 HTML items. The code for each item is as follows (a note on the combined run-time output appears after the steps).
4) Code for 1st HTML Item: Set source type=Text
  <div onClick="AddMarker('
5) Code for 2nd HTML Item: Set source type=Report Expressions
             [Query1].[Country]
6) Code for 3rd HTML Item: Set source type=Text
             ')">

7) Code for 4th HTML Item: Set source type=Text
             </div>
8) Code for 5th HTML Item: This is the place where I have defined the code for google map.

<script type="text/javascript" src="http://maps.google.com/maps/api/js?sensor=false"></script>
<div id="map" style="width: 700px; height: 400px"></div>
<script type="text/javascript">
 // Initial view: arbitrary center (New York) zoomed out to world level.
 var latlng = new google.maps.LatLng(40.756, -73.986);
 var options = {
  center : latlng,
  zoom : 1,
  mapTypeId : google.maps.MapTypeId.ROADMAP
 };

 // Creating the map, the geocoder, and a single shared info window.
 var map = new google.maps.Map(document.getElementById('map'), options);
 var geocoder = new google.maps.Geocoder();
 var infowindow = new google.maps.InfoWindow();

 // Called from the onClick handler built by HTML items 1-3: geocode
 // the clicked country name and drop a marker at the result.
 function AddMarker(address)
 {
  geocoder.geocode( {'address' : address}, function(results, status)
  {
   if (status == google.maps.GeocoderStatus.OK)
   {
    //map.setCenter(results[0].geometry.location);
    var marker = new google.maps.Marker( {map : map, position : results[0].geometry.location} );
    infowindow.setContent(address);
    google.maps.event.addListener(marker, 'click', function()
    {
     infowindow.open(map, marker);
    });
   }
  });
 }
</script>

9) Run the report and you will see a list with country names, Revenue, and Planned Revenue, and a Google map on the right side of the report.

10) Click on a country name and you will see the marker come up.
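For reference, assuming the country data item sits between the third and fourth HTML items in the cell, the first four HTML items plus the list cell render at run time roughly as follows ('France' is just a hypothetical country value):

<div onClick="AddMarker('France')">France</div>

Clicking the div calls AddMarker with the country name, which geocodes it and drops the marker on the map.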
Thanks a lot for visiting the blog post. Let me know your queries; I will respond to them when I get time.

Thx,
Shyam Gohil

Wednesday, September 2, 2009

How to Digg

This article is about how to use the Digg feature to drive website traffic.

I am just giving the Digg button a try by setting it up on my website.

Tuesday, July 28, 2009

Cognos reports and local file system

One of the issues we faced during my new assignment was storing report output on the local computer's file system. Until then we had been storing output only in the content store, so maintaining report output was not a major task. But later, as the requirements changed, we had to come up with a solution.

After doing lots of analysis and research, I came up with the following solution, and it works, guys!!!

1) Open Cognos Configuration on the server.
2) Edit the global parameters.
3) Go to the General tab and give the appropriate path to the file system.
4) Save the settings and then restart the services.
5) Then go to Cognos Connection and open Dispatchers and Services.



6) Click on the define file system option.

7) Specify the folders and subfolders if required.

8) Select a report to store the output to the local file system and provide the appropriate parameters.

9) And run the report in the background.

10) Check the output in the respective file folder.


This is just a draft version... I am yet to prepare the complete document.

Comments are welcome.

Cheers!!!!
C1PH3R

Cognos SQL and native SQL

Within my company, I joined a new account as a report developer. It is an altogether new assignment for me. Its main goal is to optimize query performance so that reports take less time to generate.

Here, the team uses DB2 on mainframes for the backend and Cognos to generate the reports. So whenever a report is requested from Cognos, a query is executed on the backend. Just to inform you all: a cost is associated with each query executed on DB2 mainframe systems, so if a report-generation query takes 5 minutes to execute, a corresponding cost is charged. In order to minimize that cost, I started using the native SQL feature of Cognos. It is truly amazing; rather than using the complex joins of Cognos SQL, we can tune query performance much better using native SQL.

In my report development, I got the chance to use native SQL in a few of the reports, and it worked well. It also improved report execution time, so it was really a win-win situation for all of us…

For me, I learned a new way of optimizing my queries, and my team got a way to cut down the cost. It was great…