Feed aggregator

Report Google Map Plugin v1.0 Released

Jeff Kemp - Fri, 2019-07-19 11:25

Over the past couple of weeks I’ve been working on an overhaul of my Google Maps region for Oracle Application Express. This free, open-source plugin allows you to integrate fully-featured Google Maps into your application, with a wide range of built-in declarative features including dynamic actions, as well as more advanced API routines for running custom JavaScript with the plugin.

The plugin has been updated to Oracle APEX 18.2 (as that is the version my current system is using). Unfortunately this means that people still on older versions will miss out, unless someone is willing to give me a few hours on their APEX 5.0 or 5.1 instance so I can backport the plugin.

EDIT: Release 1.0.1 includes some bugfixes and a backport for APEX 5.0, 5.1 and 18.1.

The plugin is easy to install and use. You provide a SQL Query that returns latitude, longitude, and information for the pins, and the plugin does all the work to show them on the map.
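For example, a minimal source query might look something like this (the table and column names here are purely illustrative, and the exact column list the plugin expects is described in its documentation):

```sql
-- Illustrative only: "venues" and its columns are hypothetical;
-- the plugin needs latitude, longitude and some pin information.
select lat,
       lng,
       venue_name as name,
       'Opens at ' || opening_time as info
  from venues
 where lat is not null
   and lng is not null
```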

The plugin has been rewritten to use the jQuery UI Widget interface, at the suggestion of Martin D’Souza. This makes for a cleaner integration on any APEX page, and reduces the JavaScript footprint of each instance on the page if you need two or more map regions at the same time. This represented a rather steep learning curve for me personally, but I learned a lot and I’m pleased with the result. Of course, I’m sure I’ve missed a few tricks that the average JavaScript coder would consider obvious.

The beta releases of the plugin (0.1 to 0.10) kept adding more and more plugin attributes until it hit the APEX limit of 25 region-level attributes. This was obviously not very scalable for future enhancements, so in Release 1.0 I ran the scythe through all the attributes and consolidated, replaced, or removed more than half of them – while preserving almost every single feature. This means v1.0 is not backwards compatible with the beta versions; although many attributes are preserved, others (including the SQL Query itself, which is rather important) would be lost in the conversion if the plugin were merely replaced. For this reason I’ve changed the Internal ID of the plugin. This is so that customers who are currently using a beta version can safely install Release 1.0 alongside it, without affecting all the pages where they are using the plugin. They can then follow the instructions to gradually upgrade each page that uses the plugin.

All of the plugin attributes relating to integrating the plugin with page items have been removed. Instead, it is relatively straightforward to use Dynamic Actions to respond to events on the map, and an API of JavaScript functions can be called to change its behaviour. All of this is fully documented and sample code can be found in the wiki.

New features include, but are not limited to:

  • Marker Clustering
  • Geo Heatmap visualisation (this replaces the functionality previously provided in a separate plugin)
  • Draggable pins
  • Lazy Load (data is now loaded in a separate Ajax call after the page is loaded)

The plugin attributes that have been added, changed or removed are listed here.

If you haven’t used this plugin before, I encourage you to give it a go. It’s a lot of fun and the possibilities presented by the Google Maps JavaScript API are extensive. You do need a Google Maps API Key, which requires a Google billing account, but it is worth the trouble. It is recommended to put an HTTP Referer restriction on your API Key so that people can’t just copy your public key and use it on their own sites. For more information refer to the Installation Instructions.

If you are already using a beta version of the plugin in your application, please review the Upgrading steps before starting. Don’t panic! It’s not quite as simple as just importing the plugin into your application, but it’s not overly complicated. If you were using any of the Page Item integration attributes, you will need to implement Dynamic Actions to achieve the same behaviour. If you had any JavaScript integrations with the plugin, you will need to update them to use the new jQuery UI Widget API calls. I am keen for everyone to update to Release 1.0 as soon as possible, so I will provide free support (via email) for anyone needing help with this.

I am very keen to hear from everyone who is using the plugin, and how it is being used – please let me know in the comments below. If you notice a bug or have a great idea to enhance the plugin, please raise an issue on GitHub.

Explaining Oracle’s Success in Cloud Applications

Oracle Press Releases - Wed, 2019-07-17 14:00

By Michael Hickins—Jul 17, 2019

The University of Pittsburgh has moved its business to Oracle Cloud Applications to improve efficiency and organizational insights.

Every business and every business leader today wants to modernize using technology, all while reducing costs and, above all, eliminating complexity.

From insurance to health care to education to luxury goods, companies across all industries are juggling competing demands from employees, consumers, and partners, and are being challenged to make better decisions across the board, more quickly than ever.

And they’re looking to large business cloud providers like Oracle to not only simplify IT and make it more accessible to stakeholders, but to provide continuous improvements, such as the inclusion of intuitive AI and machine learning tools.

“Technology is an incredible enabler that can help organizations… not just streamline processes, but also improve engagement and transform existing business processes and models,” says Rondy Ng, senior vice president of applications development at Oracle.

Monte Ciotto, associate vice chancellor of financial information systems at the University of Pittsburgh, which recently decided to implement Oracle ERP Cloud, noted the critical importance of technology in the university’s ability to achieve its mission. “To maximize value and impact, our technology must be a coordinated effort across business functions,” he said. “With Oracle ERP Cloud we’ll be able to manage finance, HR and student data on the same platform, creating a single source of truth that improves efficiency and organizational insights.”

These types of organizational insights are in large part driven by analytics, AI, and ML functions of the type being embedded within the larger Oracle ERP Cloud and Oracle HCM Cloud application suites.

Oracle’s strong position is underlined by a statement from IDC cited by Oracle CEO Mark Hurd during the most recent earnings report: “Per IDC's latest annual market share results, Oracle gained the most market share globally out of all enterprise applications SaaS vendors three years running—in calendar year ‘16, ‘17 and ‘18.”*

Hurd noted a number of businesses that have recently chosen Oracle as their cloud provider, including Ferguson, a $21 billion wholesale plumbing equipment distributor, which is using Oracle ERP Cloud along with EPM and supply chain applications.

Other recent converts to the Oracle Cloud Applications suite include Argo Insurance, Experian, Helmerich & Payne, Wright Medical, Emerson Electric, Rutgers University, Waste Management, and Tiffany.

If Oracle is succeeding so wildly in the enterprise cloud apps space, it’s only because it’s helping its customers succeed by making it easier for them to find answers and solutions to the mounting challenges that face them day to day.

“Education is evolving and the technology that drives our organization forward needs to reflect modern education best practices,” said Becky King, associate vice president of IT, Baylor University. “Shifting to Oracle Cloud Applications will help us introduce modern best practices that will make our organization more efficient and reach our goal of becoming a top-tier, Christian research institution. Moving core finance, planning and HR systems to one cloud-based platform will also improve business insight and enhance our ability to respond to changing dynamics in education.”

*Source: Per IDC’s latest annual market share results, Oracle gained the most market share globally out of all Enterprise Applications SaaS vendors three years running—in CY16, CY17 and CY18.

Source: IDC Public Cloud Services Tracker, April 2019. Enterprise Applications SaaS refers to the IDC SaaS markets CRM, Engineering, Enterprise Resource Management (including HCM, Financial, Enterprise Performance Management, Payroll, Procurement, Order Management, PPM, EAM), SCM, and Production and Operations Applications

Oracle Named a Leader in Retail Price Optimization Applications Worldwide

Oracle Press Releases - Wed, 2019-07-17 07:00
Press Release
Oracle Named a Leader in Retail Price Optimization Applications Worldwide
Recognized for providing next-generation retail planning in the age of curated merchandise orchestration

Redwood Shores, Calif.—Jul 17, 2019

Oracle was recently named a Leader in the IDC MarketScape: Worldwide Retail Price Optimization Applications 2019 Vendor Assessment.1 The list includes price optimization vendors focused on the B2C business model based on their client base, use of advanced analytics, machine learning and artificial intelligence (AI), and prominence on enterprise retailers’ buying shortlist. Oracle Retail’s placement as a leader underscores its continued investment in analytics, optimization, and data sciences. With these offerings, retailers gain an unprecedented line of sight into the future performance of their assortments and the impact every promotional offer will have on the bottom line.

The IDC MarketScape report notes that “traditional retail planning, of which life-cycle price optimization is a key part, has run its course. We’ve defined next-generation retail planning as curated merchandise orchestration (CMO). CMO is the central nervous system of enterprise and ecosystem signals that harmonizes its own and adjacent processes from design to deliver. Pricing spans the curating and orchestrating sides of CMO as a flywheel to create sufficient demand and efficient sell-through of inventory.” In the report, IDC recognized Oracle for its deep expertise and technology assets in complementary merchandising and supply chain planning analytics, execution, and operations, as well as its omni-channel commerce platform.

Oracle Retail’s price, promotion, and markdown optimization applications leverage Oracle Retail Science Platform Cloud Service which combines AI, machine learning, and decision science with data captured from Oracle Retail SaaS applications as well as third-party data. The unique property of these self-learning applications is that they detect trends, learn from results, and increase their accuracy the more they are used, adding massive amounts of contextual data to get a clearer picture on what motivates outcomes. These pricing applications are complemented by a broad suite of retail planning, optimization, and execution applications—particularly financial planning.

“Retailers have a big opportunity to leverage machine learning and retail science to bring their business to the next level of ‘curated merchandise orchestration,’” said Jeff Warren, vice president, Oracle Retail. “Oracle provides retailers with next-practice price optimization applications based on our Retail Science Platform, empowering them to automate and predict the impact of pricing decisions while simultaneously delivering offers that delight their customers and increase redemption rates.”

Oracle Retail solutions considered in the IDC MarketScape: Worldwide Price Optimization Applications, 2019 Vendor Assessment, include Oracle Retail Merchandising System Cloud Service, Oracle Retail Customer Engagement Cloud Service, Oracle Retail Planning and Optimization Suite, and the Oracle Retail Science Platform Cloud Service.

Access a complimentary excerpt copy of the report for further details.

1 IDC MarketScape: Worldwide Retail Price Optimization Applications 2019 Vendor Assessment, doc #US45034619, May 2019

Contact Info
Kris Reeves
Oracle
+1.650.506.5942
kris.reeves@oracle.com
About IDC MarketScape

The IDC MarketScape vendor assessment model is designed to provide an overview of the competitive fitness of ICT (information and communications technology) suppliers in a given market. The research methodology utilizes a rigorous scoring methodology based on both qualitative and quantitative criteria that results in a single graphical illustration of each vendor’s position within a given market. IDC MarketScape provides a clear framework in which the product and service offerings, capabilities and strategies, and current and future market success factors of IT and telecommunications vendors can be meaningfully compared. The framework also provides technology buyers with a 360-degree assessment of the strengths and weaknesses of current and prospective vendors.

About Oracle Retail

Oracle provides retailers with a complete, open, and integrated suite of best-of-breed business applications, cloud services, and hardware that are engineered to work together and empower commerce. Leading fashion, grocery, and specialty retailers use Oracle solutions to anticipate market changes, simplify operations, and inspire authentic brand interactions. For more information, visit our website at www.oracle.com/retail.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


Schedule reboots of your AWS instances and how that can result in a hard reboot and corruption

Yann Neuhaus - Wed, 2019-07-17 02:50

From time to time you might need to reboot your AWS instances, maybe because you applied some patches, or for whatever other reason. Rebooting an AWS instance can be done in several ways: you can of course do it directly from the AWS console, and you can use the AWS command line utilities as well. If you want to schedule a reboot, you can do that either with CloudWatch or with SSM Maintenance Windows. In this post we will only look at CloudWatch and Systems Manager, as these two can be used to schedule the reboot easily using AWS-native utilities. You could, of course, do the same with cron and the AWS command line utilities, but that is not the scope of this post.

For CloudWatch the procedure for rebooting instances is the following: Create a new rule:

Go for “Schedule” and give a cron expression. In this case it means: 16-July-2019 at 07:45. Select the “EC2 RebootInstances API call” and provide the instance IDs you want to have rebooted. There is one limitation: you can only add up to five targets. If you need more, you have to use Systems Manager as described later in this post. You should pre-create an IAM role with sufficient permissions to use for this, as otherwise a new one will be created each time.
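As a side note, CloudWatch schedule expressions use six cron fields (minutes, hours, day-of-month, month, day-of-week, year) rather than the classic five, and they are evaluated in UTC. A small sketch showing how the expression for the date above is put together:

```python
from datetime import datetime

def cloudwatch_cron(dt):
    """Build a CloudWatch schedule expression for a one-off run.
    CloudWatch cron fields: minutes hours day-of-month month day-of-week year.
    '?' means 'no specific value' and must appear in either the
    day-of-month or the day-of-week field."""
    return f"cron({dt.minute} {dt.hour} {dt.day} {dt.month} ? {dt.year})"

# 16-July-2019 at 07:45 (UTC), as in the rule above
print(cloudwatch_cron(datetime(2019, 7, 16, 7, 45)))  # cron(45 7 16 7 ? 2019)
```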

Finally give a name and a description, that’s it:


Once the time reaches your cron expression target, the instance(s) will reboot.

The other solution for scheduling stuff against many instances is to use AWS SSM. It requires a bit more preparation work but in the end this is the solution we decided to go for as more instances can be scheduled with one maintenance window (up to 50) and you could combine several tasks, e.g. executing something before doing the reboot and do something else after the reboot.

The first step is to create a new maintenance window:

Of course it needs a name and an optional description:

Again, in this example, we use a cron expression for the scheduling (same as above in the CloudWatch example). Be aware that this is UTC time:

Once the maintenance window is created we need to attach a task to it. Until now we only specified a time to run something but we did not specify what to run. Attaching a task can be done in the task section of the maintenance window:

In this case we go for an “Automation task”. Name and description are not required:

The important part is the document to run, in our case it is “AWS-RestartEC2Instance”:

Choose the instances you want to run the document against:

And finally specify the concurrency and error count and again, an IAM role with sufficient permissions to perform the actions defined in the document:

Last, but not least, specify a pseudo parameter called “{TARGET_ID}” which will tell AWS SSM to run that against all the instances you selected in the upper part of the screen:

That’s it. Your instances will be rebooted at the time you specified in the cron expression. All fine and easy, and you never have to worry about scheduled instance reboots: just adjust the cron expression and maybe the list of instances and you are done for the next scheduled reboot. Really? We did it like that against 100 instances and we got a real surprise. What happened? Not many, but a few instances were rebooted hard, and one of them even needed to be restored afterwards. Why? This never happened in the tests we did before. When an instance does not reboot within 4 minutes, AWS performs a hard reboot. This can lead to corruption, as stated here. When you have busy instances at the time of the reboot, this is not what you want. On Windows you get something like this:

You can easily reproduce that by putting a Windows system under heavy load with a CPU stress test and then scheduling a reboot as described above.
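The behaviour described above (a graceful reboot attempt, falling back to a hard reboot after four minutes) can be sketched in plain Python. This is an illustration of the documented logic only, not AWS code; the callbacks and the timeout default are stand-ins:

```python
import time

def reboot_instance(request_reboot, is_up_again, force_reboot,
                    timeout_seconds=240, poll_seconds=1, sleep=time.sleep):
    """Illustrative sketch: ask for a graceful reboot, and if the instance
    has not come back within the timeout (4 minutes in the AWS case),
    fall back to a hard reboot -- the step that risks corruption on a
    busy system."""
    request_reboot()
    waited = 0
    while waited < timeout_seconds:
        if is_up_again():
            return "graceful"
        sleep(poll_seconds)
        waited += poll_seconds
    force_reboot()  # hard reboot: like pulling the power cord
    return "forced"
```

With a busy instance that never responds within the timeout, the sketch falls through to `force_reboot`, which is exactly the surprise we ran into.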

In the background the automation document calls aws:changeInstanceState, and that comes with a force parameter:

… and here we have it again: Risk of corruption. When you take a closer look at the automation document that stops an EC2 instance you can see that as well:

So what is the conclusion of all this? It is not to blame AWS for anything; everything is documented and works as documented. Testing in a test environment does not necessarily mean it works in production as well. Even if it is documented, you might not expect it, because your tests went fine and you missed the part of the documentation where the behavior is explained. AWS Systems Manager is still a great tool for automating tasks, but you really need to understand what happens before implementing it in production. And finally: working on public clouds makes many things easier, but other things harder to understand and troubleshoot.

The post Schedule reboots of your AWS instances and how that can result in a hard reboot and corruption appeared first on Blog dbi services.

Age Range of Generations & Customer Demographics

VitalSoftTech - Tue, 2019-07-16 10:09

As an online content producer and marketer, one of your primary concerns should be whether the content and ads on your website are reaching the correct customer demographics and age ranges of generations. Come to think of it, the question of who your target audience is is a challenging one. You must answer this question to make […]

The post Age Range of Generations & Customer Demographics appeared first on VitalSoftTech.

Categories: DBA Blogs

Downloading And Installing JDK 8 for OIC Connectivity Agent

Online Apps DBA - Tue, 2019-07-16 06:25

[Downloading And Installing JDK 8 for OIC Connectivity Agent] Some adapters/connectors use a connectivity agent to establish a connection with the on-premise system, and the connectivity agent uses the JVM (Java Virtual Machine) to run code on the on-premise system. So, are you looking for the steps to ⬇Download and Install JDK Version 8 on […]

The post Downloading And Installing JDK 8 for OIC Connectivity Agent appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Intermittent cellsrv crashes with ORA-07445 after upgrading to 12.2.1.1.7

Syed Jaffar - Tue, 2019-07-16 04:05
Exadata X6-2 full and half racks were patched recently with the 12.2.1.1.7 Aug/2018 quarterly patch set. An ORA-07445 error started to be observed, with cellsrv intermittently crashing.

The following error is noticed in the cellsrv alert.log:

ORA-07445: exception encountered: core dump [0000000000000000+0] [11] [0x000000000] [] [] [] 

The above is registered as a bug whenever the cell storage is patched with 12.2.1.1.7 or 12.2.1.1.8. Therefore, if you are planning to patch your cell storage software to one of the above versions, ensure you also apply patch 28181789 to avoid cellsrv intermittently crashing with the ORA-07445 error. Otherwise, you need to upgrade the storage software to 18.1.5.

Symptom

Customers may experience intermittent cellsrv crashes with ORA-07445: [0000000000000000+0] [11] [0x000000000] after upgrading the storage cell software version to 12.2.1.1.7 or 12.2.1.1.8

Change

Upgrade the Storage cell software to 12.2.1.1.7 or 12.2.1.1.8

Cause

Bug 28181789 - ORA-07445: [0000000000000000+0] AFTER UPGRADING CELL TO 12.2.1.1.7

Fix

Patch 28181789 is available for the 12.2.1.1.7 and 12.2.1.1.8 storage software releases. Follow the README instructions to apply the patch.
~OR~
Apply 18.1.5 or later, which includes the fix for bug 28181789.

References:
Exadata: Intermittent cellsrv crashes with ORA-07445: [0000000000000000+0] [11] [0x000000000] after upgrading to 12.2.1.1.7 or 12.2.1.1.8 (Doc ID 2421083.1)

Oracle EBS 12.2 - adop phase=cleanup fails with "library cache pin" waits

Senthil Rajendran - Tue, 2019-07-16 03:24
adop phase=cleanup fails with "library cache pin" waits

When running the cleanup action in adop, it hung forever.

Let’s dig deeper to see the problem.

SQL> SELECT s.sid,
       s.username,
       s.program,
       s.module from v$session s where module like '%AD_ZD%';

2313 APPS       perl@pwercd01vn074 (TNS V1-V3)                   AD_ZD                                                            


SQL> select event from v$session where sid=2313 ;

EVENT
----------------------------------------------------------------
library cache pin


SQL>


SQL> select decode(lob.kglobtyp, 0, 'NEXT OBJECT', 1, 'INDEX', 2, 'TABLE', 3, 'CLUSTER',
                      4, 'VIEW', 5, 'SYNONYM', 6, 'SEQUENCE',
                      7, 'PROCEDURE', 8, 'FUNCTION', 9, 'PACKAGE',
                      11, 'PACKAGE BODY', 12, 'TRIGGER',
                      13, 'TYPE', 14, 'TYPE BODY',
                      19, 'TABLE PARTITION', 20, 'INDEX PARTITION', 21, 'LOB',
                      22, 'LIBRARY', 23, 'DIRECTORY', 24, 'QUEUE',
                      28, 'JAVA SOURCE', 29, 'JAVA CLASS', 30, 'JAVA RESOURCE',
                      32, 'INDEXTYPE', 33, 'OPERATOR',
                      34, 'TABLE SUBPARTITION', 35, 'INDEX SUBPARTITION',
                      40, 'LOB PARTITION', 41, 'LOB SUBPARTITION',
                      42, 'MATERIALIZED VIEW',
                      43, 'DIMENSION',
                      44, 'CONTEXT', 46, 'RULE SET', 47, 'RESOURCE PLAN',
                      48, 'CONSUMER GROUP',
                      51, 'SUBSCRIPTION', 52, 'LOCATION',
                      55, 'XML SCHEMA', 56, 'JAVA DATA',
                      57, 'SECURITY PROFILE', 59, 'RULE',
                      62, 'EVALUATION CONTEXT',
                      'UNDEFINED') object_type,
       lob.KGLNAOBJ object_name,
       pn.KGLPNMOD lock_mode_held,
       pn.KGLPNREQ lock_mode_requested,
       ses.sid,
       ses.serial#,
       ses.username
  FROM x$kglpn pn,
       v$session ses,
       x$kglob lob,
       v$session_wait vsw
 WHERE pn.KGLPNUSE = ses.saddr
   and pn.KGLPNHDL = lob.KGLHDADR
   and lob.kglhdadr = vsw.p1raw
   and vsw.event = 'library cache pin'
 order by lock_mode_held desc;



OBJECT_TYPE        OBJECT_NAME          LOCK_MODE_HELD LOCK_MODE_REQUESTED        SID    SERIAL# USERNAME
------------------ -------------------- -------------- ------------------- ---------- ---------- ------------------------------
PACKAGE            DBMS_SYS_SQL                      2                   0       3822      60962 APPS
PACKAGE            DBMS_SYS_SQL                      2                   0       2313      27404 APPS
PACKAGE            DBMS_SYS_SQL                      2                   0       2313      27404 APPS
PACKAGE            DBMS_SYS_SQL                      2                   0       3822      60962 APPS
PACKAGE            DBMS_SYS_SQL                      0                   2       3821      14545
PACKAGE            DBMS_SYS_SQL                      0                   2       3821      14545
PACKAGE            DBMS_SYS_SQL                      0                   3       2313      27404 APPS
PACKAGE            DBMS_SYS_SQL                      0                   3       2313      27404 APPS


SQL> select distinct
       ses.ksusenum sid, ses.ksuseser serial#, ses.ksuudlna username, ses.ksuseunm machine,
       ob.kglnaown obj_owner, ob.kglnaobj obj_name,
       pn.kglpncnt pin_cnt, pn.kglpnmod pin_mode, pn.kglpnreq pin_req,
       w.state, w.event, w.wait_time, w.seconds_in_wait
       -- lk.kglnaobj, lk.user_name, lk.kgllksnm,
       -- lk.kgllkhdl, lk.kglhdpar,
       -- trim(lk.kgllkcnt) lock_cnt, lk.kgllkmod lock_mode, lk.kgllkreq lock_req,
       -- lk.kgllkpns, lk.kgllkpnc, pn.kglpnhdl
  from x$kglpn pn, x$kglob ob, x$ksuse ses, v$session_wait w
 where pn.kglpnhdl in (select kglpnhdl from x$kglpn where kglpnreq > 0)
   and ob.kglhdadr = pn.kglpnhdl
   and pn.kglpnuse = ses.addr
   and w.sid = ses.indx
 order by seconds_in_wait desc
/

       SID    SERIAL# USERNAME                       MACHINE                        OBJ_OWNER            OBJ_NAME                PIN_CNT   PIN_MODE    PIN_REQ STATE               EVENT                                                         WAIT_TIME SECONDS_IN_WAIT
---------- ---------- ------------------------------ ------------------------------ -------------------- -------------------- ---------- ---------- ---------- ------------------- ---------------------------------------------------------------- ---------- ---------------
      2313      27404 APPS                           apprnrcod05                    SYS                  DBMS_SYS_SQL                  0          0          3 WAITING             library cache pin                                             0              701
      2313      27404 APPS                           apprnrcod05                    SYS                  DBMS_SYS_SQL                  2          2          0 WAITING             library cache pin                                             0              701
       803      46104 APPS                           orarnrcod05                    SYS                  DBMS_SYS_SQL                  2          2          0 WAITED SHORT TIME   control file sequential read                                 -1               24



Other errors appear in the adop logs; you can also use scanlogs to verify the errors.

[ERROR] [CLEANUP 1:1 ddl_id=69120] ORA-04020: deadlock detected while trying to lock object SYS.DBMS_SYS_SQL SQL: begin sys.ad_grants.cleanup; end;


Reference
Adop Cleanup Issue: "[ERROR] [CLEANUP] ORA-04020: deadlock detected " (Doc ID 2424333.1)

Fix : 

SQL> select count(1)
  from dba_tab_privs
 where table_name = 'DBMS_SYS_SQL'
   and privilege = 'EXECUTE'
   and grantee = 'APPS';

  COUNT(1)
----------
         1

SQL> exec sys.ad_grants.cleanup;

PL/SQL procedure successfully completed.

SQL> select count(1)
  from dba_tab_privs
 where table_name = 'DBMS_SYS_SQL'
   and privilege = 'EXECUTE'
   and grantee = 'APPS';

  COUNT(1)
----------
         0

SQL>



Now adop cleanup runs fine without any issues.





Fix for Oracle VBCS "Error 404--Not Found"

Andrejus Baranovski - Tue, 2019-07-16 01:13
We are using a Pay As You Go Oracle VBCS instance and had an issue accessing the VBCS home page after starting the service. The service started successfully, without errors, but when accessing the VBCS home page URL, "Error 404--Not Found" was returned.

I raised a support ticket and must say I received the response and help promptly. If you encounter a similar issue yourself, hopefully this post will shed some light.

Apparently "Error 404--Not Found" was returned because the VBCS instance wasn't initialized during instance start. It wasn't initialized because of expired passwords for the VBCS DB schemas. Yes, you read it right: internal system passwords expire in the Cloud too.

Based on the instructions given by Oracle Support, I was able to extract logs from the VBCS WebLogic instance (by connecting through SSH to the VBCS cloud machine) and provide them to the support team (yes, VBCS runs on WebLogic). They found password expiry errors in the log, similar to this:

weblogic.application.ModuleException: java.sql.SQLException: ORA-28001: the password has expired

Based on the provided instructions, I extracted the VBCS DB schema name and connected through SQL Developer. Then I executed a SQL statement given by the support team to reset all VBCS DB passwords in bulk. The next password expiry is set for January 2019. Should it expire at all?
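The bulk reset statement came from support and is not reproduced here, but if you hit the same thing and can reach the database, the standard dictionary views will show which accounts are close to expiring (run as a privileged user; the 30-day window and the DEFAULT profile assumption below are illustrative):

```sql
-- List accounts whose passwords expire within the next 30 days
select username, account_status, expiry_date
  from dba_users
 where expiry_date < sysdate + 30
 order by expiry_date;

-- One option to stop internal schema passwords expiring again,
-- assuming the schemas use the DEFAULT profile (check first!):
-- alter profile default limit password_life_time unlimited;
```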

Summary: if you encounter "Error 404--Not Found" after starting a VBCS instance and trying to access the VBCS home page, then most likely (but not always) it is related to the VBCS DB schema password expiry issue.

Delete MGMTDB and MGMTLSNR from OEM using emcli

Michael Dinh - Mon, 2019-07-15 18:22

Per Doc ID 1933649.1, MGMTDB and MGMTLSNR should not be monitored.

$ grep oms /etc/oratab 
oms:/u01/middleware/13.2.0:N

$ . oraenv <<< oms

$ emcli login -username=SYSMAN
Enter password : 
Login successful

$ emcli sync
Synchronized successfully

$ emcli get_targets -targets=oracle_listener -format=name:csv|grep -i MGMT
1,Up,oracle_listener,MGMTLSNR_host01

$ emcli delete_target -name="MGMTLSNR_host01" -type="oracle_listener" 
Target "MGMTLSNR_host01:oracle_listener" deleted successfully

$ emcli sync
$ emcli get_targets|grep -i MGMT

Note: MGMTDB was not monitored and can be deleted as follows:

$ emcli get_targets -targets=oracle_database -format=name:csv|grep -i MGMT
$ emcli delete_target -name="MGMTDB_host01" -type="oracle_database" 

The problem with monitoring MGMTDB and MGMTLSNR is getting a silly page when they are relocated to a new host.

Host=host01
Target type=Listener 
Target name=MGMTLSNR_host01
Categories=Availability 
Message=The listener is down:

We are dealing with the same issue for scan listeners and have not reached an agreement to have them deleted, as I and a few others think they should not be monitored.
Unfortunately, there is no official Oracle documentation for this.

Here’s a typical page for when all scan listeners are running on only one node.

Host=host01
Target type=Listener
Target name=LISTENER_SCAN2_cluster
Categories=Availability
Message=The listener is down: 

$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node node02
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node node02
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node node02

Email Spoofing

Yann Neuhaus - Mon, 2019-07-15 11:07

Have you ever had that unhealthy sensation of being accused of something that has nothing to do with you? Of feeling helpless in the face of an accusing email which, with its imperative and accusatory tone, manages to cast opprobrium on you?

This is the purpose of a particular kind of sextortion mail that uses spoofing to try to extort money from you: a message from a supposed “hacker” who claims to have hacked into your computer. He threatens to publish compromising images taken without your knowledge with your webcam, and asks you for a ransom, most of the time in virtual currency.

Something like that:

 

Date:  Friday, 24 May 2019 at 09:19 UTC+1
Subject: oneperson
Your account is hacked! Renew the pswd immediately!
You do not heard about me and you are definitely wondering why you’re receiving this particular electronic message, proper?
I’m ahacker who exploitedyour emailand digital devicesnot so long ago.
Do not waste your time and make an attempt to communicate with me or find me, it’s not possible, because I directed you a letter from YOUR own account that I’ve hacked.
I have started malware to the adult vids (porn) site and suppose that you watched this website to enjoy it (you understand what I mean).
Whilst you have been keeping an eye on films, your browser started out functioning like a RDP (Remote Control) that have a keylogger that gave me authority to access your desktop and camera.
Then, my softaquiredall data.
You have entered passcodes on the online resources you visited, I intercepted all of them.
Of course, you could possibly modify them, or perhaps already modified them.
But it really doesn’t matter, my app updates needed data regularly.
And what did I do?
I generated a reserve copy of every your system. Of all files and personal contacts.
I have managed to create dual-screen record. The 1 screen displays the clip that you were watching (you have a good taste, ha-ha…), and the second part reveals the recording from your own webcam.
What exactly must you do?
So, in my view, 1000 USD will be a reasonable amount of money for this little riddle. You will make the payment by bitcoins (if you don’t understand this, search “how to purchase bitcoin” in Google).
My bitcoin wallet address:
1816WoXDtSmAM9a4e3HhebDXP7DLkuaYAd
(It is cAsE sensitive, so copy and paste it).
Warning:
You will have 2 days to perform the payment. (I built in an exclusive pixel in this message, and at this time I understand that you’ve read through this email).
To monitorthe reading of a letterand the actionsin it, I utilizea Facebook pixel. Thanks to them. (Everything thatis usedfor the authorities may helpus.)

In the event I do not get bitcoins, I shall undoubtedly give your video to each of your contacts, along with family members, colleagues, etc?

 

Users who are victims of these scams receive a message from a stranger who presents himself as a hacker. This alleged “hacker” claims to have taken control of his victim’s computer following consultation of a pornographic site (or any other site that morality would condemn). The cybercriminal then announces having compromising videos of the victim made with his webcam. He threatens to publish them to the victim’s personal or even professional contacts if the victim does not pay him a ransom. This ransom, which ranges from a few hundred to several thousand dollars, is claimed in a virtual currency (usually in Bitcoin but not only).

To scare the victim even more, cybercriminals sometimes go so far as to write to the victim with his or her own email address, in order to make him or her believe that they have actually taken control of his or her account. 

First of all, there is no need to be afraid. While the “hack” the cybercriminals describe is not theoretically impossible, in practice it is technically complex and, above all, time-consuming to carry out. Since scammers target victims by the thousands, we can safely deduce that they have not done what they claim.

These messages are just an attempt at a scam. In other words, if you receive such a blackmail message and do not pay, nothing more will obviously happen. 

Then, there is no need to change your email credentials. Your email address is usually already known and circulates on the Internet, because you use it regularly on different sites to identify yourself and communicate. Some of those sites have resold or exchanged their address lists with more or less scrupulous marketing partners.

If cybercriminals have written to you from your own email address to make you believe that they have taken control of it: be aware that the sender address shown in a message is just a display field that can be spoofed very easily, without much technical skill.

In any case, the way to go is simple: don’t panic, don’t answer, don’t pay, just throw this mail in the trash (and don’t forget to empty it regularly). 

On the mail server side, setting up certain elements can help to prevent this kind of mail from spreading in the organization. This involves deploying the following measures on your mail server:

  •       SPF (Sender Policy Framework): a standard for verifying the domain name of the sender of an email (standardized in RFC 7208 [1]). Its adoption can reduce spam. It compensates for SMTP (Simple Mail Transfer Protocol), which provides no sender verification mechanism: SPF reduces the possibility of spoofing by publishing a record in the DNS (Domain Name System) indicating which IP addresses are allowed or forbidden to send mail for the domain in question.
  •       DKIM (DomainKeys Identified Mail): a reliable authentication standard for the domain name of the sender of an email that provides effective protection against spam and phishing (standardized in RFC 6376 [2]). DKIM works by cryptographic signature: it verifies the authenticity of the sending domain and also guarantees the integrity of the message.
  •       DMARC (Domain-based Message Authentication, Reporting and Conformance): a technical specification to help reduce email misuse by providing a way to deploy and monitor authentication (standardized in RFC 7489 [3]). DMARC standardizes how recipients perform email authentication using the SPF and DKIM mechanisms.
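Concretely, all three mechanisms are published as DNS TXT records. The records below are only a sketch for a hypothetical domain example.com: the selector name, IP address, policy and report address are placeholders, and the DKIM public key is truncated.

```text
; Hedged sketch for a hypothetical domain; all values are placeholders.

; SPF: only the MX hosts and the listed address may send mail for example.com
example.com.                  IN TXT "v=spf1 mx ip4:203.0.113.10 -all"

; DKIM: public key published under a sender-chosen selector ("mail" here)
mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSq..."

; DMARC: policy for SPF/DKIM failures, plus an aggregate-report address
_dmarc.example.com.           IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-rua@example.com"
```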

 

REFERENCES

[1] S. Kitterman, “Sender Policy Framework (SPF)”, RFC 7208, 2014, https://tools.ietf.org/html/rfc7208

[2] D. Crocker, T. Hansen, M. Kucherawy, “DomainKeys Identified Mail (DKIM) Signatures”, RFC 6376, 2011, https://tools.ietf.org/html/rfc6376

[3] M. Kucherawy, E. Zwicky, “Domain-based Message Authentication, Reporting, and Conformance (DMARC)”, RFC 7489, 2015, https://tools.ietf.org/html/rfc7489

The article Email Spoofing first appeared on Blog dbi services.

Forecast Model Tuning with Additional Regressors in Prophet

Andrejus Baranovski - Mon, 2019-07-15 04:17
I’m going to share my experiment results with Prophet additional regressors. My goal was to check how an extra regressor would influence the forecast calculated by Prophet.

I am using the Bike Sharing in Washington D.C. dataset from Kaggle. The data comes with the number of bike rentals per day and the weather conditions. I have created and compared three models:

1. Time series Prophet model with date and number of bike rentals
2. A model with an additional regressor: weather temperature
3. A model with two additional regressors: weather temperature and weather state (raining, sunny, etc.)

We should see the effect of the regressors by comparing these three models.

Read more in my Towards Data Science post.

apt-get install aspnetcore-runtime-2.2 fails on ubuntu: workaround with snap

Dietrich Schroff - Sun, 2019-07-14 05:10
Two weeks ago I tried to install a Microsoft tool on my Linux laptop and got the following error.
So I tried:

apt-get install aspnetcore-runtime-2.2
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 aspnetcore-runtime-2.2 : Depends: dotnet-runtime-2.2 (>= 2.2.5) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.

A quick workaround was to use the dotnet snap package:
snap install dotnet-sdk --classic
And then to add a link from /usr/bin/dotnet to the snap binary:
$ sudo ln -s /snap/dotnet-sdk/current/dotnet /usr/bin/dotnet
$ ls -l /usr/bin/dotnet 
lrwxrwxrwx 1 root root 31 Jun 23 20:56 /usr/bin/dotnet -> /snap/dotnet-sdk/current/dotnet

Oracle Express Edition – features new to 18cXE

The Anti-Kyte - Sat, 2019-07-13 15:55

I learned a number of things watching the recently concluded Women’s Soccer World Cup.

  • it is possible for a human body to be fouled in the penalty area without then falling over as if it has just been shot (see Lisa-Marie Utland for Norway against England for proof)
  • England have developed a happy knack of reaching the Semi-Final of every tournament they enter
  • Alex Morgan is a tea-drinker

There were some complaints that Morgan’s celebration of her goal against England was disrespectful. Personally, I thought it was rather witty. Let’s face it, if she’d really wanted to stir up some controversy, she’d have mimed putting the milk in first.
That said, she is going to face a challenge at the Olympics next year, where she may find herself up against a united Great Britain team.
If you’re not up on your sporting geopolitics, Great Britain (for now at least) comprises four nations – England, Wales, Northern Ireland and Scotland.
Should Morgan need to celebrate in a similar vein, the tea will be just the start. She’ll then need to neck a pint of Brains SA (known as “Skull Attack” in Cardiff), followed by a Guinness (there is no border in Ireland when it comes to the Black Stuff), before moving on to a Scotch single-malt chaser.

Anyone looking for an object lesson in how to up their game could do far worse than have a look at how Oracle Express Edition has evolved from 11g to 18c…

“Hey Megan, how much extra stuff did Oracle squeeze into 18c Express Edition ?”

Using the License documentation for 18c XE and that of 11g XE, I’ve compiled a list of features which are now included in Express Edition but were not in 11gXE.
This is mainly for my own benefit as I keep being surprised when I find another – previously Enterprise Edition only – feature in Express Edition.
I’ve also listed the new stuff that wasn’t previously available in any edition of Oracle 11g.

Anyhow, for anyone who might find it useful…

Extra functionality in 18c

Using the Functional Categories mentioned in the license documents as a template, the features newly available in 18c Express Edition are :

Consolidation

Perhaps the most profound structural change is the advent of Multitenant functionality.
18c XE comes with Oracle Multitenant and allows up to three Pluggable Databases (PDBs).
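As a sketch of what that enables, creating and opening an extra PDB from the seed looks like the following. The admin password and FILE_NAME_CONVERT paths are illustrative assumptions; 18c XE will reject the statement once three user PDBs exist.

```sql
-- Sketch: creating one of the up-to-three user PDBs allowed in 18c XE.
-- Password and datafile paths below are illustrative, not prescriptive.
CREATE PLUGGABLE DATABASE pdb2
  ADMIN USER pdbadmin IDENTIFIED BY "ChangeMe_1"
  FILE_NAME_CONVERT = ('/opt/oracle/oradata/XE/pdbseed/',
                       '/opt/oracle/oradata/XE/pdb2/');
ALTER PLUGGABLE DATABASE pdb2 OPEN;
```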

Development Platform

SQLJ is now available.

High Availability
  • Online Index Rebuild
  • Online table organization
  • Online table redefinition
  • Trial recovery
  • Fast-start fault recovery
  • Flashback Table
  • Flashback Database
  • Cross-platform Backup and Recovery
Integration

Sharded Queues, which were introduced to Oracle after 11g, are included.

Networking

Network Compression is also new to Oracle since 11g.

Performance
  • Client Side Query Cache
  • Query Results Cache
  • PL/SQL Function Result Cache
  • Adaptive Execution Plans
  • In-Memory Column Store
  • In-Memory Aggregation
  • Attribute Clustering
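To illustrate one of these, here is a hedged sketch of the PL/SQL Function Result Cache; the table and column names are hypothetical.

```sql
-- Sketch: a result-cached lookup function (table/columns are hypothetical).
-- Repeated calls with the same argument are served from the shared result
-- cache until a table the result depends on changes.
CREATE OR REPLACE FUNCTION get_country_name (p_country_id NUMBER)
  RETURN VARCHAR2
  RESULT_CACHE
IS
  l_name VARCHAR2(100);
BEGIN
  SELECT country_name INTO l_name
  FROM   countries
  WHERE  country_id = p_country_id;
  RETURN l_name;
END;
/
```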
Security
  • Column-Level Encryption
  • Tablespace Encryption
  • Oracle Advanced Security
  • Oracle Database Vault
  • Oracle Label Security
  • Centrally Managed Users
  • Fine-grained auditing
  • Privilege Analysis
  • Real Application Security
  • Redaction
  • Transparent Sensitive Data Protection
  • Virtual Private Database
Spatial and Graph Data

11g XE contained no spatial functionality at all. In 18c you get :

  • Oracle Spatial and Graph
  • Property Graph and RDF Graph Technologies (RDF/OWL)
  • Partitioned spatial indexes
VLDB, Data Warehousing, and Business Intelligence
  • Oracle Partitioning
  • Oracle Advanced Analytics
  • Oracle Advanced Compression
  • Advanced Index Compression
  • Prefix Compression (also called Key Compression)
  • Basic Table Compression
  • Deferred Segment Creation
  • Bitmapped index, bitmapped join index, and bitmap plan conversions
  • Transportable tablespaces, including cross-platform and full transportable export and import
  • Summary management—Materialized View Query Rewrite
Stuff that’s not included

Unlike its predecessor, 18cXE does not come with a version of Application Express (APEX). Fortunately, you can still get APEX and Oracle REST Data Services for the same low, low price of – well, nothing – and install them separately.

I’m Kyle Benson and this is how I work

Duncan Davies - Fri, 2019-07-12 06:00

I’ve not blogged on this site for a while so it takes a special post to break the hiatus. I’m delighted to finally be able to share the “How I Work” entry for Kyle Benson, one half of the all-conquering PSAdmin.io duo. Kyle and Dan are super-busy, splitting their time between PeopleSoft consulting and the PSAdmin.io slack community, their Podcast, their conference and their website.  I’m thrilled that he has added his profile to our ‘How I Work‘ series.


Name: Kyle Benson

Occupation: Independent PeopleSoft Consultant and Co-owner of psadmin.io
Location: Minneapolis, MN
Current computer: Dell Precision 5510
Current mobile devices: Pixel 2
I work: To keep from getting bored. I have a ton of fun solving tough problems and optimizing things.

What apps/software/tools can’t you live without?

Besides your phone and computer, what gadget can’t you live without?
Saying I “can’t live without” this is overstating it, but I love my home automation gadgets. I have been slowly adding more and more to my home. Lately my pace has slowed down so my family can keep up with my craziness. I’m currently using a SmartThing Gen. 1 HUB and I’m liking the ecosystem.  That reminds me, time to upgrade!

What’s your workspace like?
I split time between client sites and my home office. I like to use a standing desk and keep it rather tidy. I love my ultrawide monitor and have a “studio” step for creating psadmin.io content.


What do you listen to while you work?
I love to put on mellow, ambient, downtempo style music. I often listen to the same playlist on repeat for months. Something about the relaxing, repetitive sounds helps get me in “flow” faster. The artist Blackmill really started me down this road. The current playlists I’m listening to on Spotify are ‘Atmospheric Calm’ and ‘Soundscapes For Gaming’.

What PeopleSoft-related productivity apps do you use?
I love Phire for development and git for DPK/admin scripts. Having the history and flexibility to migrate is so nice. Using psadmin-plus helps a lot, too!

Do you have a 2-line tip that some others might not know?
Make sure you are using aliases so you aren’t wasting time typing! Here is a short list of aliases I use often, mostly related to changing directories.

  • cddpk
    • Change to the DPK base directory
  • cdcfg
    • When using multiple $PS_CFG_HOMEs on a server, change to the config homes base directory
  • cdweb $domain_name
    • Change to the PORTAL.war directory of a domain
  • pupapp $environment
    • Run puppet apply for an $environment (ie. production)
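For anyone curious, here is one way such shortcuts might be defined in a ~/.bashrc. This is only a sketch: every path is an illustrative assumption, and the parameterized entries have to be shell functions rather than plain aliases.

```shell
# Sketch: how the shortcuts above might be defined in ~/.bashrc.
# Every path below is an illustrative assumption, not the actual setup.
alias cddpk='cd /opt/oracle/psft/dpk'
alias cdcfg='cd /opt/oracle/psft/cfg'

# Aliases cannot take arguments, so the parameterized shortcuts
# are better written as shell functions:
cdweb() { cd "$PS_CFG_HOME/webserv/$1/applications/peoplesoft/PORTAL.war"; }
pupapp() { puppet apply --environment "$1" "/etc/puppetlabs/code/environments/$1/manifests/site.pp"; }
```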

What SQL/Code do you find yourself writing most often?
Currently I’ve found myself living in the browser development tools. I’ve been exploring some of the new JavaScript that Fluid and Unified Navigation introduces. I do a lot of debugging, playing in console, etc to find out how some of these features work. This is all pretty complex stuff and you can really get lost down the rabbit hole.

What would be the one item you’d add to PeopleSoft if you could?
Current CPU archives in DPK.

What everyday thing are you better at than anyone else?
I love riddles and puzzles. I’ve been really into escape rooms this past year, too.

How do you keep yourself healthy and happy?
Getting outside with the family year round is key. Living in a place like Minnesota, you learn a lack of vitamin D and cabin fever is no joke. Walking, hiking, biking in the summer. Biking and cross country skiing in the winter. Also, the family heads up to the North Shore of Lake Superior every few months. These weekend getaways are always a great recharge.

What’s the best advice you’ve ever received?
Find a job you enjoy doing, and you will never have to work a day in your life.

Merging OBIEE 12c .RPD binary files directly in Git

Rittman Mead Consulting - Thu, 2019-07-11 15:41
Let's talk about OBIEE concurrent development!

Enabling concurrent development for OBIEE RPD is a recurring theme in OBIEE blogs. Full support for RPD development with Gitflow has long since been part of the Rittman Mead's BI Developer Toolkit and is described in great detail in Minesh's blog post. What you are currently reading is a follow-up to Minesh's post, but taking it one step further: instead of calling Python scripts to perform Gitflow steps, we want to perform all those steps directly from our Git client (including the ones performing a merge, like feature finish), be it command line or a visual application like Sourcetree.

RPD versioning directly in Git - do we need that?

How is versioning directly in a Git client better than calling Python scripts? First of all, there is the convenience of using the same approach and the same tool for all content you need to version control. A Python script has to come with instructions for its use, whereas every developer knows how to use Git. Last but not least, a 3-way merge, which is used for Gitflow's feature finish, release finish and hotfix finish commands, requires three repositories that need to be passed to the script in the right order. Doing merges in your Git client is quicker and less error prone.

What is a Git merge?

Before we proceed with discussing our options for merging OBIEE RPDs, let us quickly recap on how Git merges work.

There are two types of Git merges: Fast-forward Merges and 3-way Merges. Strictly speaking, Fast-forward Merges are not merges at all. If the base branch has not seen any changes whilst you worked on your feature branch, merging the feature back into the base simply means 'fast-forwarding' the base branch tip to your feature branch tip, i.e. your feature becomes the new base. That is allowed because the two branches have not diverged: the histories of the base and the feature branches form a single line.

When the two branches have diverged, i.e. when the base has been modified by the time we want to merge our feature, a 3-way merge is the only option.

In the above diagram, feature 1 can be fast-forward merged whereas feature 2 must be 3-way merged into the develop branch.

Note that because a Fast-forward Merge is not an actual merge but rather a replacement, it does not matter what content is being merged. A 3-way Merge, however, can be quite challenging or even impossible depending on the content being merged, and can result in merge conflicts that require manual resolution.

So... can Git 3-way merge RPDs?

OBIEE RPD can be saved in two formats: a single binary .rpd file or one or many .xml files (depending on what rpd-to-xml conversion method you use). The choice here seems obvious - it is common knowledge that Git cannot reliably 3-way merge binary files. So XML format it is. Or is it?

Like any other text file, Git certainly can merge XML files. But will it produce an XML that is still recognised as a consistent OBIEE RPD? Well, there are some OBIEE developer teams that have reported success with this approach. My own experience even with the most trivial of RPD changes shows that somewhere during the .xml to .rpd conversion, then introducing changes in the .rpd and in the end converting it back to .xml, the XML tags get reshuffled and sometimes their identifiers can change as well. (Equalising RPD objects is supposed to help with the latter.) I found no standard Git merge algorithm that would reliably and consistently perform RPD merge for XML format produced this way, be it a single large XML file or a collection of small XML files.

Fortunately, there is a better (and less risky) way.

Creating a Git custom merge driver

It is possible to create custom Git merge drivers and then assign them to specific file extensions (like .rpd) in the .gitattributes file - as described in Git documentation. According to the guide, defining a custom merge driver in Git is really straight forward: just add a new entry to the .git/config file:

[merge "filfre"]
	name = feel-free merge driver
	driver = filfre %O %A %B %L %P
	recursive = binary

Here, filfre is the code name of the custom merge driver, feel-free merge driver is the descriptive name of it (hardly used anywhere) and the driver value is where we define the driver itself. It is a shell command for your operating system. Typically it would call a shell script or a binary executable. It can be a java -jar execution or a python my-python-script.py call. The latter is what we want - we have already got a 3-way merge script for OBIEE RPD in the Rittman Mead's BI Developer Toolkit, as blogged by Minesh.

For the script to know about the content to be merged, it receives the following command line arguments: %O %A %B %L %P. These are the values that Git passes to the custom merge driver:

  • %O - this is the Base or the Original for the 3-way merge. If we are using Git Flow, this is the develop branch's version, from which our feature branch was created;
  • %A - this is the Current version for the 3-way merge. If we are using Git Flow, this is the feature branch that we want to merge back into develop;
  • %B - this is the Other or the Modified version of the 3-way merge. If we are using Git Flow, this is the develop branch as it is currently (diverged from the original Base), when we want to merge our feature branch back into it.

There are two more values, which we do not need and will ignore: %L is Conflict marker size, e.g. 7 for '>>>>>>>'. This is irrelevant for us, because we are handling binary files. %P is the full path name where the merge result will be stored - again irrelevant for us, because Python is capable of getting full paths for the files it is handling, in case it needs it.

Creating a Git custom merge driver for OBIEE .rpd binary files

What we need here is a Python script that performs a 3-way RPD merge by calling OBIEE commands comparerpd and patchrpd from command line. Please note that OBIEE creates a 4th file as the output of the merge, whereas a git merge driver is expected to overwrite the Current (%A) input with the merge result. In Python, that is quite doable.

Another important thing to note is that the script must return exit code 0 in case of a success and exit code 1 in case there were merge conflicts and automatic merge could not be performed. Git determines the success of the merge solely based on the exit code.
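A minimal sketch of such a driver is shown below. comparerpd and patchrpd are real OBIEE command-line utilities, but the exact flags, the omitted password handling and the error checking here are illustrative assumptions; consult the OBIEE command-line documentation before using anything like this.

```python
#!/usr/bin/env python
"""Hypothetical sketch of a git custom merge driver for binary .rpd files.

Called by git as: python git-rpd-merge-driver.py %O %A %B
The comparerpd/patchrpd flags below are illustrative assumptions.
"""
import shutil
import subprocess
import sys
import tempfile


def merge_rpd(base, current, other, run=subprocess.call):
    patch = tempfile.mktemp(suffix=".xml")
    merged = tempfile.mktemp(suffix=".rpd")
    # 1. Generate a patch describing the Base -> Current changes
    if run(["comparerpd", "-C", current, "-G", base, "-D", patch]) != 0:
        return 1
    # 2. Apply that patch onto Other, using Base as the original
    if run(["patchrpd", "-C", other, "-G", base, "-I", patch, "-O", merged]) != 0:
        return 1  # git treats any non-zero exit code as a merge conflict
    # 3. Git expects the merge result to overwrite the Current (%A) file
    shutil.copyfile(merged, current)
    return 0


if __name__ == "__main__" and len(sys.argv) >= 4:
    sys.exit(merge_rpd(*sys.argv[1:4]))
```

The injectable `run` parameter is only there to make the control flow testable without the OBIEE tools installed.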

Once we have the Python script ready and have tested it standalone, we open our local Git repository folder where our OBIEE .rpd files will be versioned and open the file <repo root>/.git/config for editing and add the following lines to it:

[merge "rpdbin"]
    name = binary RPD file merge driver
    driver = python C:/Developer/bi_developer_toolkit/git-rpd-merge-driver.py %O %A %B

Our Python script expects 3 command line arguments - names of .rpd files: Base (%O), Current (%A) and Modified (%B). Those will be temporary files, created by Git in run time.

Once the config file is modified, create a new file <repo root>/.gitattributes and add the following line to it:

*.rpd merge=rpdbin

This assumes that your binary RPD files will always have the extension .rpd. If with a different extension, the custom merge driver will not be applied to them.

And that is it - we are done!

Note: if you see that the custom merge driver works from the Git command line tool but does not work in Sourcetree, you may need to run Sourcetree as Administrator.

Trying it out

We will use Sourcetree as our Git/Gitflow client - it is really good at visualising the versioning flow and shows the currently available Gitflow commands for the currently checked out branch.

We will use the RPD from Oracle Sample Application v602 for OBIEE 12c 12.2.1.1.0 for our testing.

After initialising Gitflow in our Git repository, we add the out-of-the-box Sample Apps RPD to our repository's develop branch - that will be our Base.

Then we create two copies of it and modify each copy to introduce changes we would like to see merged. In the screenshots below, you can see Business Models and Databases renamed. But I did also change the content of those Business Models.

Repo 1:

Repo 2:

Now we create a new feature branch and overwrite the Base rpd it contains with our Repo 1 rpd.

As the next step, we check out the develop branch again and replace the Base rpd there with Repo 2 rpd.

Note that we need to make sure the develop branch differs from the original Base when we finish our feature. If develop is still identical to the original Base at that point, a fast-forward merge will be performed instead and our custom merge driver will not be applied.

The result should look like this in Sourcetree. You can see a fork, indicating that the develop and the feature8 branches have diverged:

We are ready to test our custom 3-way merge driver. In Sourcetree, from the Gitflow menu, select Finish Feature.

Confirm your intention to merge the feature back into develop.

If all goes as planned, Git will call your custom merge driver. In Sourcetree, click the Show Full Output checkbox to see the output from your script. In my script, I tagged all output with a [Git RPD Merge Driver] prefix (except the output coming from external functions). This is what my output looks like:

Now let us check the result: make sure the develop branch is checked out, then open the merged RPD in the Admin tool.

We can see that it worked - we can now do full Gitflow lifecycle for OBIEE .rpd files directly in Git.

But what if the merge fails?

If the merge fails, the feature branch will not be deleted and you will have to merge the .rpd files manually in the OBIEE Admin tool. Note that you can get the Current, the Modified and the Base .rpd files from Git. Once you are happy with your manual merge result, check out the develop branch and add it there.

Categories: BI & Warehousing

2019 Oracle ITA National Fall Championships Come to Newport Beach, California

Oracle Press Releases - Thu, 2019-07-11 12:00
Press Release
2019 Oracle ITA National Fall Championships Come to Newport Beach, California

TEMPE, Ariz.—Jul 11, 2019

The Intercollegiate Tennis Association (ITA) and Oracle announced today that Newport Beach Tennis Club and The Tennis Club at Newport Beach Country Club will serve as host sites for the 2019 Oracle ITA National Fall Championships November 6–10. The men’s and women’s finals will be held at Newport Beach Tennis Club.

The event returns to Southern California for the second time in the last three years. Arizona’s Surprise Tennis & Racquet Complex held the tournament in 2018. The JW Marriott Desert Springs Resort and Indian Wells Tennis Garden co-hosted in 2017.

“Oracle’s commitment to college tennis continues to help move our sport to the forefront of intercollegiate athletics,” ITA Chief Executive Officer Dr. Timothy Russell said. “The ITA is proud that our championships are some of the best in college sports. We are very excited to come to Newport Beach, which promises to ensure a fantastic student-athlete experience.”

The Newport Beach Tennis Club features 19 lighted tennis courts and a sunken center court with stadium seating. It has hosted numerous professional events throughout its history, including the Davis Cup and Oracle Challenger Series. The Tennis Club at Newport Beach offers 24 outdoor courts.

“Oracle remains committed to collegiate tennis and ensuring young players get the opportunity to improve their games and compete in great venues,” Oracle CEO Mark Hurd said.  “We’re looking forward to seeing American collegians and juniors play some terrific tennis at this year’s Oracle ITA National Championships.”

The Oracle ITA National Fall Championships features 128 of the nation’s top collegiate singles players (64 men and 64 women) and 64 doubles teams (32 men’s teams and 32 women’s teams). It is the only event on the collegiate tennis calendar that highlights competitors from all five divisions (NCAA Divisions I, II, III, NAIA, and Junior College) playing in the same tournament. Now in its third year, the event replaced the ITA National Indoor Intercollegiate Championships.

The Oracle ITA National Fall Championships joins the Oracle ITA Masters as one of two major collegiate tournaments held in the Southern California area and co-sponsored by Oracle and the ITA. The Oracle Masters returns to Pepperdine University and the Malibu Racquet Club for the fifth consecutive year and is scheduled for Sept. 26–29.

Contact Info
Mindi Bach
Oracle Corporate Communications
650-506-3221
mindi.bach@oracle.com
Al Barba
Director of Communications, Marketing & Advanced Media, ITA
602-687-6379
abarba@itatennis.com
About the Intercollegiate Tennis Association

The Intercollegiate Tennis Association (ITA) is committed to serving college tennis and returning the leaders of tomorrow. As the governing body of college tennis, the ITA oversees men’s and women’s varsity tennis at NCAA Divisions I, II and III, NAIA and Junior/Community College divisions. The ITA administers a comprehensive awards and rankings program for men’s and women’s varsity players, coaches and teams in all divisions, providing recognition for their accomplishments on and off the court. For more information on the ITA, visit the ITA website at www.itatennis.com, like the ITA on Facebook or follow @ITA_Tennis on Twitter and Instagram.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


Migrating your users from md5 to scram authentication in PostgreSQL

Yann Neuhaus - Thu, 2019-07-11 03:43

One of the new features in PostgreSQL 10 was the introduction of stronger password authentication based on SCRAM-SHA-256. How can you migrate your existing users that currently use md5 authentication to the new method without any interruption? Actually that is quite easy, as you will see in a few moments, but there is one important point to consider: Not every client/driver does already support SCRAM-SHA-256 authentication so you need to check that before. Here is the list of the drivers and their support for SCRAM-SHA-256.

The default method that PostgreSQL uses to encrypt passwords is defined by the “password_encryption” parameter:

postgres=# show password_encryption;
 password_encryption 
---------------------
 md5
(1 row)

Let’s assume we have a user that was created like this in the past:

postgres=# create user u1 login password 'u1';
CREATE ROLE

With the default method of md5 the hashed password looks like this:

postgres=# select passwd from pg_shadow where usename = 'u1';
               passwd                
-------------------------------------
 md58026a39c502750413402a90d9d8bae3c
(1 row)

As you can see the hash starts with md5, so we know that this hash was generated by the md5 algorithm. When we want this user to use scram-sha-256 instead, what do we need to do? The first step is to change the “password_encryption” parameter:

postgres=# alter system set password_encryption = 'scram-sha-256';
ALTER SYSTEM
postgres=# select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

postgres=# select current_setting('password_encryption');
 current_setting 
-----------------
 scram-sha-256
(1 row)

From now on the server will use scram-sha-256 instead of md5. But what happens when our user wants to connect to the instance once we have changed that? Currently this is defined in pg_hba.conf:

postgres=> \! grep u1 $PGDATA/pg_hba.conf
host    postgres        u1              192.168.22.1/24         md5

Even though the default is no longer md5, the user can still connect to the instance because the stored password hash for that user did not change:

postgres@rhel8pg:/home/postgres/ [PGDEV] psql -h 192.168.22.100 -p 5433 -U u1 postgres
Password for user u1: 
psql (13devel)
Type "help" for help.

postgres=> 

Once the user has changed the password:

postgres@rhel8pg:/home/postgres/ [PGDEV] psql -h 192.168.22.100 -p 5433 -U u1 postgres
Password for user u1: 
psql (13devel)
Type "help" for help.

postgres=> \password
Enter new password: 
Enter it again: 
postgres=> 

… the hash of the new password is not md5 but SCRAM-SHA-256:

postgres=# select passwd from pg_shadow where usename = 'u1';
                                                                passwd                               >
----------------------------------------------------------------------------------------------------->
 SCRAM-SHA-256$4096:CypPmOW5/uIu4NvGJa+FNA==$PNGhlmRinbEKaFoPzi7T0hWk0emk18Ip9tv6mYIguAQ=:J9vr5CQDuKE>
(1 row)
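
The verifier above follows the format SCRAM-SHA-256$&lt;iterations&gt;:&lt;salt&gt;$&lt;StoredKey&gt;:&lt;ServerKey&gt;, and it can be reproduced with nothing but the Python standard library. A sketch of the construction defined in RFC 5802/RFC 7677 (note that a production implementation would also apply SASLprep normalization to the password, which is omitted here):

```python
import base64
import hashlib
import hmac
import os

def scram_sha256_verifier(password: str, salt=None, iterations: int = 4096) -> str:
    """Build a PostgreSQL SCRAM-SHA-256 verifier.

    SaltedPassword = PBKDF2-HMAC-SHA-256(password, salt, iterations)
    ClientKey      = HMAC(SaltedPassword, "Client Key")
    StoredKey      = SHA-256(ClientKey)
    ServerKey      = HMAC(SaltedPassword, "Server Key")
    """
    if salt is None:
        salt = os.urandom(16)  # PostgreSQL also uses a random per-user salt
    salted = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    client_key = hmac.new(salted, b"Client Key", hashlib.sha256).digest()
    stored_key = hashlib.sha256(client_key).digest()
    server_key = hmac.new(salted, b"Server Key", hashlib.sha256).digest()

    def b64(raw: bytes) -> str:
        return base64.b64encode(raw).decode()

    return (f"SCRAM-SHA-256${iterations}:{b64(salt)}"
            f"${b64(stored_key)}:{b64(server_key)}")

print(scram_sha256_verifier("u1"))
```

Because only StoredKey and ServerKey are kept, the server never stores anything from which the cleartext password can be directly recovered, which is the main advantage over md5.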

One might expect that the user can no longer connect, as we have not yet changed pg_hba.conf:

postgres@rhel8pg:/home/postgres/ [PGDEV] psql -h 192.168.22.100 -p 5433 -U u1 postgres
Password for user u1: 
psql (13devel)
Type "help" for help.

postgres=> 

But in reality the connection still works, as the server now uses the SCRAM-SHA-256 algorithm for this user. So once all the users have changed their passwords, you can safely switch the rule in pg_hba.conf and you’re done:

postgres=> \! grep u1 $PGDATA/pg_hba.conf
host    postgres        u1              192.168.22.1/24         scram-sha-256

postgres=# select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

Before you switch, just make sure that no user still has a hash starting with md5; every entry should start with SCRAM-SHA-256.
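
Running `select usename, passwd from pg_shadow` returns every verifier, and the scheme can be classified by its prefix. A small illustrative sketch (the `legacy_user` row and its values are made up for the example):

```python
def password_method(passwd: str) -> str:
    """Classify a pg_shadow passwd value by its verifier scheme."""
    if passwd.startswith("SCRAM-SHA-256$"):
        return "scram-sha-256"
    if passwd.startswith("md5"):
        return "md5"
    return "unknown"

# Example rows as they might come back from:
#   select usename, passwd from pg_shadow;
rows = [
    ("u1", "SCRAM-SHA-256$4096:CypPmOW5/uIu4NvGJa+FNA==$..."),
    ("legacy_user", "md58026a39c502750413402a90d9d8bae3c"),
]
for usename, passwd in rows:
    print(usename, password_method(passwd))
```

Any user still reported as md5 needs a password change before you tighten the pg_hba.conf rule, or they will be locked out.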

The post Migrating your users from md5 to scram authentication in PostgreSQL appeared first on Blog dbi services.

GoCardless Banks on NetSuite to Support International Expansion

Oracle Press Releases - Thu, 2019-07-11 03:00
Press Release
GoCardless Banks on NetSuite to Support International Expansion NetSuite Helps Innovative UK Fintech Company Enhance Financial Operations and Reshape Global Payments Industry

LONDON, UK—Jul 11, 2019

GoCardless, a global direct debit network headquartered in the UK, has selected Oracle NetSuite to support its mission to take the pain out of getting paid for businesses with recurring revenue. With NetSuite, the fintech company, which grew by 60 percent in the last year, has been able to automate financial management and help reduce the complexities of operating across multiple markets, currencies and tax laws as it rapidly expands its international operations.

Founded in 2012, GoCardless has created a global bank debit network to rival credit and debit cards, as well as a platform designed to take invoice, subscription, membership and installment payments. As demand for its services grows, with $10 billion in transactions a year and 40,000 customers around the world, GoCardless needed a single, scalable business platform that could provide the visibility and control required to optimise its financial reporting. After a careful evaluation, GoCardless selected NetSuite to manage and automate core business processes.

“Since implementing NetSuite, we have gone from basic accounting to conducting in-depth financial analysis,” said Catherine Birkett, CFO, GoCardless. “We can now report financial close faster and more accurately, quickly and easily set up new subsidiaries, and efficiently meet our stakeholders’ reporting requirements. This is incredibly valuable as we continue to expand into new markets, and the best part about NetSuite is we now have a solution that will scale with our growth path for years to come.”

With NetSuite, GoCardless will be able to increase the agility of its financial operations as it expands globally. By gaining a unified view into the business, GoCardless will be better enabled to address the complexity it faces with entering new international markets and make decisions more confidently and quickly.

“GoCardless has a very advanced business model that is changing the way organisations collect payments,” said Nicky Tozer, VP of EMEA, Oracle NetSuite. “As its network expands to cover North America, Australia and more than 30 European countries, GoCardless needed a single and scalable business platform that could support its future growth and that’s why it selected NetSuite.”

Contact Info
Samuel Jamieson
PR Manager, EMEA
+44 (0)7468 752231
sjamieson@netsuite.com
About GoCardless

GoCardless is a global leader in recurring payments. GoCardless’ global payments network and technology platform take the pain out of getting paid for businesses with recurring revenue. More than 40,000 businesses worldwide, from multinational corporations to SMBs, transact through GoCardless each month, and the business processes $10bn of payments each year. GoCardless now has five offices: UK, France, Australia, Germany and USA.

About Oracle NetSuite

For more than 20 years, Oracle NetSuite has helped organizations grow, scale and adapt to change. NetSuite provides a suite of cloud-based applications, which includes financials/Enterprise Resource Planning (ERP), HR, professional services automation and omnichannel commerce, used by more than 18,000 customers in 203 countries and dependent territories.

For more information, please visit http://www.netsuite.com.

Follow NetSuite’s Cloud blog, Facebook page and @NetSuite Twitter handle for real-time updates.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Samuel Jamieson

  • +44 (0)7468 752231

Adding a Fluid WorkCenter to a Navigation Collection

Jim Marion - Wed, 2019-07-10 20:36

Oracle has done an outstanding job converting Classic Self-service to Fluid to promote the modern, mobile user experience. But what about back-office functionality? We certainly can't predict the future, but it seems that back-office transactions will remain Classic for a very long time. Rather than change the appearance of the back-office user experience, I believe our best strategy is to build back-office, business process-based navigation. Our users don't seem excited about the NavBar and Navigator, and we can nearly eliminate their use through properly constructed business process-based navigation. Here are several business process-based navigation tools:

  • Navigation Collections
  • Master Detail
  • Dashboards
  • Activity Guides
  • WorkCenters

Because of its simplicity and ease of maintenance, we often recommend customers start with Tile Wizard-based Navigation Collections. Oracle, on the other hand, is providing business process based navigation by converting Classic WorkCenters to Fluid WorkCenters.

In a recent attempt to provide a segue from one business process to another, I added a Fluid WorkCenter to a Navigation Collection. Both a Tile Wizard-based Navigation Collection and a Fluid WorkCenter contain a left-hand sidebar, so embedding one in the other creates a left-panel collision. To avoid this collision, I set the Navigation Collection item's Replace Window property. Unfortunately, launching the Fluid WorkCenter from the Navigation Collection generated an SQL error. This prompted me to try launching the Fluid WorkCenter outside the Navigation Collection. To my surprise, this also generated an SQL error. The WorkCenter had worked before I added it to the Navigation Collection, so this was clearly unexpected. After reviewing the app server log, I discovered that a single-row subquery within the Fluid WorkCenter framework was returning more than one row. It didn't do this before I added the Fluid WorkCenter to the Navigation Collection, so what changed? One thing: I added a Fluid WorkCenter to a Navigation Collection. The SQL that caused the problem looks for any CREF that uses the WorkCenter's target component and is marked as a Fluid WorkCenter (contains &FLWC=Y in the CREF's additional parameters). By adding the Fluid WorkCenter CREF to a Navigation Collection, I created a CREF Link to the original CREF. The end result was a second matching row in the portal registry table (PSPRSMDEF).

Lesson learned: Don't add a Fluid WorkCenter to a Navigation Collection or any other structure that will result in a second CREF with the same (or similar) target. This makes sense because Fluid WorkCenters are business process-based navigation. Adding business process-based navigation to business process-based navigation may not make sense.

Is there a workaround? Absolutely! Instead of adding the Fluid WorkCenter directly to a Navigation Collection, create a redirect iScript. The PeopleCode in the iScript will send the user to the existing Fluid WorkCenter content reference rather than duplicating the existing content reference in the Navigation Collection.

Is the workaround worth the effort? That is an entirely different question. The effort is minimal: just a few lines of PeopleCode and a Permission List update. But what are the savings and the user experience impact? Fluid WorkCenters are designed to be launched as homepage tiles, and to launch a homepage tile you must be on a homepage. The savings, therefore, is that the user won't have to return to a homepage to launch the next business process but can transfer directly from one to the next. Returning to the prior business process is as simple as clicking the Fluid header's back button.

Configuring productive Business Process navigation is critical to successful Fluid implementation. Are you ready to learn more? Register now for our Fluid 1 course online. Do you have a whole team to train? Contact us for group pricing and delivery options.
