Feed aggregator
Hi,
I am using APEX_WEB_SERVICE.MAKE_REQUEST instead of the UTL_HTTP package to consume REST API calls. Is there a way to close an APEX_WEB_SERVICE.MAKE_REQUEST call, similar to UTL_HTTP.END_RESPONSE?
I am getting ORA-29270: too many open HTTP requests (more than 5). I need to call the API repeatedly because of pagination, but this Oracle limit is not allowing it.
I was under the impression that APEX_WEB_SERVICE.MAKE_REQUEST opens and ends the response automatically.
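To show what I mean, here is a simplified sketch of the pagination loop (I am using make_rest_request here for brevity; the endpoint URL and paging parameter are made up):
<code>
declare
  l_response clob;
  l_page     pls_integer := 1;
begin
  loop
    l_response := apex_web_service.make_rest_request(
                    p_url         => 'https://example.com/api/items?page=' || l_page,
                    p_http_method => 'GET');
    -- after a handful of iterations this raises ORA-29270
    exit when l_page >= 10;  -- in reality the exit depends on the response contents
    l_page := l_page + 1;
  end loop;
end;
/
</code>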
Thank you
Hi Team
We are implementing a change data capture setup in our environment where the source database has TDE in place. The encryption uses Oracle key management, and we would like to know how long the old master key remains valid after we reset (rekey) the master key, either automatically on the 180-day timeline or manually.
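For reference, the rekey we run and the view we check afterwards look roughly like this (a sketch; the keystore password is a placeholder):
<code>
-- rotate (rekey) the TDE master encryption key, keeping a keystore backup
ADMINISTER KEY MANAGEMENT SET KEY
  IDENTIFIED BY "keystore_password"
  WITH BACKUP;

-- list the master keys known to the database, old and new
SELECT key_id, creation_time, activation_time
FROM   v$encryption_keys
ORDER  BY creation_time;
</code>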
Hi Tom,
when I try to add two new disks, one per failgroup, I receive this error:
<code>
ALTER DISKGROUP OCR ADD
FAILGROUP FGA disk '/dev/oracleasm/disks/pippo_OCR1_A_alfa' NAME pippo_OCR1_A_alfa
FAILGROUP FGH disk '/dev/oracleasm/disks/pippo_OCR1_H_alfa' NAME pippo_OCR1_H_alfa
REBALANCE POWER 8;
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-59700: Failgroup FGH already belongs to site cedh
ORA-59700: Failgroup FGA already belongs to site ceda
</code>
But if I query the view v$asm_disk, I find that the failgroup names are:
<code>
Disk Group Name Path File Name Fail Group FAILGROUP_LABEL SITE_LABEL
-------------------- ------------------------------------------ -------------------------- ----------- ---------------- ------------
OCR /dev/oracleasm/disks/CEDA_OCR1 OCR_0000 FGA
/dev/oracleasm/disks/CEDA_OCR2 OCR_0001 FGA
/dev/oracleasm/disks/CEDH_OCR1 OCR_0002 FGH
/dev/oracleasm/disks/CEDH_OCR2 OCR_0003 FGH
/voting_disk/asm-nfs-ocrvote03 OCR_0004 FGQ
********************
Disk Group
[CANDIDATE] /dev/oracleasm/disks/pippo_OCR1_A_alfa
/dev/oracleasm/disks/pippo_OCR1_H_alfa
</code>
If I change the statement, swapping FGA -> CEDA and FGH -> CEDH, I get an ORA-00600 error:
<code>
ALTER DISKGROUP OCRVOTE
ADD
FAILGROUP CEDA disk '/dev/oracleasm/disks/RPU_HTH_DB_OCR1_A_UNITY' NAME RPU_HTH_DB_OCR1_A_UNITY
FAILGROUP CEDH disk '/dev/oracleasm/disks/RPU_HTH_DB_OCR1_H_UNITY' NAME RPU_HTH_DB_OCR1_H_UNITY
REBALANCE POWER 8;
ALTER DISKGROUP OCRVOTE
*
ERROR at line 1:
ORA-00600: internal error code, arguments: [kfgSiteNumPopulate02], [], [], [], [], [], [], [], [], [], [], []
</code>
Do you have any suggestions?
Thanks a lot,
Andrea
Hi Tom,
Is it advisable to have many BLOB columns in a table? (We are planning to have 20.) Each BLOB column will store a small amount of data per row.
The table may have 1.5 million rows.
What are the disadvantages (if any) of having many BLOB columns in a table?
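For context, the kind of table I have in mind is roughly this (a sketch with made-up column names); since each BLOB will hold only a small amount of data, I am assuming inline (in-row) storage would apply:
<code>
CREATE TABLE document_attrs (
  doc_id NUMBER PRIMARY KEY,
  attr01 BLOB,
  attr02 BLOB,
  attr03 BLOB
  -- ... up to 20 BLOB columns in the real table
)
LOB (attr01, attr02, attr03) STORE AS SECUREFILE (
  ENABLE STORAGE IN ROW  -- values under roughly 4000 bytes stay in the row
);
</code>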
Is there a SQL script I can use to determine whether a table is a good candidate to be shrunk or moved to reclaim space? I'm not looking to use the Segment Advisor; I want something in SQL.
My ultimate goal is to automate this task in PL/SQL (without reinventing the wheel) by identifying tables, enabling/disabling row movement and
any foreign keys, and reporting on the time taken and space reclaimed.
Below is a test case with a CLOB. But many of my tables will have one or more foreign keys.
<code>create table t1(c1 clob) lob (c1) store as basicfile;
insert into t1 select lpad('X', 4000) from dual connect by level<=1000;
select bytes/1024/1024 from user_segments where segment_name =
(select segment_name from user_lobs where table_name = 'T1' and column_name = 'C1');
BYTES/1024/1024
9
alter table t1 enable row movement;
delete from t1 where rownum <= 900;
alter table t1 modify lob (c1) (shrink space);
select bytes/1024/1024 from user_segments where segment_name =
(select segment_name from user_lobs where table_name = 'T1' and column_name = 'C1');
BYTES/1024/1024
.9375
</code>
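One approach I am considering, instead of the Segment Advisor, is roughly the following (a sketch, assuming the segments live in ASSM tablespaces): use DBMS_SPACE.SPACE_USAGE to see how many blocks are mostly empty, and compare that with the full blocks.
<code>
declare
  l_unf_blocks  number; l_unf_bytes  number;
  l_fs1_blocks  number; l_fs1_bytes  number;
  l_fs2_blocks  number; l_fs2_bytes  number;
  l_fs3_blocks  number; l_fs3_bytes  number;
  l_fs4_blocks  number; l_fs4_bytes  number;
  l_full_blocks number; l_full_bytes number;
begin
  dbms_space.space_usage(
    segment_owner      => user,
    segment_name       => 'T1',
    segment_type       => 'TABLE',
    unformatted_blocks => l_unf_blocks,  unformatted_bytes => l_unf_bytes,
    fs1_blocks         => l_fs1_blocks,  fs1_bytes         => l_fs1_bytes,
    fs2_blocks         => l_fs2_blocks,  fs2_bytes         => l_fs2_bytes,
    fs3_blocks         => l_fs3_blocks,  fs3_bytes         => l_fs3_bytes,
    fs4_blocks         => l_fs4_blocks,  fs4_bytes         => l_fs4_bytes,
    full_blocks        => l_full_blocks, full_bytes        => l_full_bytes);
  -- FS3/FS4 blocks are 50-100% free; a large number of them relative to the
  -- full blocks suggests the table is a shrink candidate
  dbms_output.put_line('50-100% free blocks: ' || (l_fs3_blocks + l_fs4_blocks)
                       || ', full blocks: ' || l_full_blocks);
end;
/
</code>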
Below are three Resource Manager Plan directives:
<code>
begin
DBMS_RESOURCE_MANAGER.create_plan_directive (
plan => 'EQ_ALLOCATION',
group_or_subplan => 'STAGE',
mgmt_p2 => 100,
parallel_degree_limit_p1 => 2);
DBMS_RESOURCE_MANAGER.create_plan_directive (
plan => 'EQ_ALLOCATION',
group_or_subplan => 'APPLICATION',
mgmt_p1 => 100,
parallel_degree_limit_p1 => 6);
DBMS_RESOURCE_MANAGER.create_plan_directive (
plan => 'EQ_ALLOCATION',
group_or_subplan => 'OTHER_GROUPS',
comment => '50% Priority for other operation on 2nd level',
mgmt_p2 => 50,
parallel_degree_limit_p1 => 1);
end;
/
</code>
Below is the code to assign the schemas to consumer groups:
<code>
BEGIN
-- Assign users to consumer groups
DBMS_RESOURCE_MANAGER_PRIVS.grant_switch_consumer_group(
grantee_name => 'USER1',
consumer_group => 'STAGE',
grant_option => FALSE);
DBMS_RESOURCE_MANAGER.set_initial_consumer_group('USER1', 'STAGE');
DBMS_RESOURCE_MANAGER_PRIVS.grant_switch_consumer_group(
grantee_name => 'USER2',
consumer_group => 'MART',
grant_option => FALSE);
DBMS_RESOURCE_MANAGER.set_initial_consumer_group('USER2', 'MART');
END;
/
</code>
As per the Oracle documentation, resources left unused at the mgmt_p1 level will be given to mgmt_p2 processes if needed.
Questions:
1) How do I check the CPU, SGA, and PGA being used by a particular session? (See the query sketch after this list.)
2) Does Oracle Resource Manager work from the beginning of the session, or does it only kick in once CPU reaches the 100% utilization mark? I tried a POC locally: I put a maximum limit on the parallel degree, say 5, and executed a query with a parallel(10) hint. A few executions ran with 5 parallel processes, but others ran with 10, even though the session's consumer group showed the new plan I had implemented.
3) If a mgmt_p1 process kicks in while mgmt_p2 processes are already using 100% of the CPU, how will Oracle make resources available for the mgmt_p1 process?
4) What can I do to take CPU utilization to the 100% mark?
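For question 1, this is the sort of query I have been using so far (a rough sketch; I am assuming the session's PGA comes from V$PROCESS and its CPU from the 'CPU used by this session' statistic):
<code>
SELECT s.sid,
       s.username,
       p.pga_used_mem  / 1024 / 1024 AS pga_used_mb,
       p.pga_alloc_mem / 1024 / 1024 AS pga_alloc_mb,
       st.value                      AS cpu_centiseconds
FROM   v$session  s
       JOIN v$process  p ON p.addr = s.paddr
       JOIN v$sesstat st ON st.sid = s.sid
       JOIN v$statname n ON n.statistic# = st.statistic#
WHERE  n.name = 'CPU used by this session'
AND    s.sid = :sid;
</code>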
We have a table with an END_DATE column defined as
<code>
Column_Name Data_Type Nullable Data_Default
END_DATE DATE Yes (null)</code>
During the period of 24-oct-2007 07:36:04 to 23-sep-2019 11:41:41, a number (but not all) of the rows have a dumped century of 10,000 for END_DATE
<code>select dump(end_date), end_date, to_char(end_date,'dd-mon-yyyy hh24:mi:ss') from rate where rate_id =8076496;</code>
<code>
DUMP(END_DATE) END_DATE TO_CHAR(END_DATE,'DD-MON-YYYY HH24:MI:SS')
Typ=12 Len=7: 200,122,12,31,1,1,1 31-DEC-22 00-000-0000 00:00:00
</code>
while others have the expected century, 2000
<code>
DUMP(END_DATE) END_DATE TO_CHAR(END_DATE,'DD-MON-YYYY HH24:MI:SS')
Typ=12 Len=7: 120,120,12,31,1,1,1 31-DEC-20 31-DEC-2020 00:00:00
</code>
The problem went away after 23-sep-2019 11:41:41, but a customer who queries for rows including the period of 24-oct-2007 07:36:04 to 23-sep-2019 11:41:41 encounters errors in processing the "00-000-0000 00:00:00" values.
While we could fix the problematic rows with SQL, the customer wants us to tell them why the problem could occur in the first place.
So the question is, how could this erroneous century get into an Oracle database in the first place?
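For what it is worth, the query we were planning to use to find the affected rows looks roughly like this (a sketch, assuming all corrupt rows carry the 200 century byte shown in the dump above):
<code>
SELECT rate_id, DUMP(end_date) AS raw_end_date
FROM   rate
WHERE  end_date IS NOT NULL
AND    DUMP(end_date) LIKE 'Typ=12 Len=7: 200,%';
</code>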
Following are step-by-step instructions for installing the Ecoute AI tool for live transcription, which provides real-time transcripts for both microphone input and speaker output. There are a lot of use cases for Ecoute, in job interview training for example.
Commands used in video to install ecoute on Windows:
Prerequisites:
- Chocolatey
- Python >=3.8.0
- An OpenAI API key
- Windows OS (Not tested on others)
- FFmpeg
Step 1: Install Chocolatey
Open PowerShell as administrator and run the following in the PowerShell window:
Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))
Step 2: Install FFmpeg, which is an audio and video converter.
choco install ffmpeg
Step 3: Clone ecoute repo
git clone https://github.com/SevaSk/ecoute
cd ecoute
pip install -r requirements.txt
Step 4: Get a free OpenAI API key from https://platform.openai.com/account/api-keys
Step 5: Create a keys.py file in the ecoute folder and insert the following in that file.
OPENAI_API_KEY="API KEY"
Step 6: Run the ecoute AI
python main.py --api
Reference: https://github.com/SevaSk/ecoute
Something as simple as patching was made convoluted by unclear documentation.
Ignacio from Oracle support was a great help, offering a Zoom session.
https://blogs.oracle.com/dataintegration/post/oracle-goldengate-veridata-12214220831-is-now-available
33953823 – OGG Veridata Bundle Patch 12.2.1.4.220831 (PS4 BP6) (Server+Agent)
============================================================
From README.txt
3. Verify the OUI Inventory.
============================================================
OPatch needs access to a valid OUI inventory to apply patches.
Note: This requires ORACLE_HOME to be set (refer to section "2. Pre-Installation Instructions") prior to running the commands below:
Validate the OUI inventory with the following commands:
$ opatch lsinventory -jre $ORACLE_HOME/oracle_common/jdk/jre
Note: Make sure the JDK version you use is the certified version for your product.
If the command errors out, contact Oracle Support and work to validate and verify the inventory setup before proceeding.
[oracle@localhost ~]$ $ORACLE_HOME/OPatch/opatch lsinventory -help
-jre This option tells OPatch to use JRE (java) from the specified location instead of the default location under Oracle Home
============================================================
### NOTE:
============================================================
NO PROCESSES ARE RUNNING AND ONLY WLS AND VERIDATA HAVE BEEN INSTALLED
veridata_12.2.1.4.0 wls_infra_12.2.1.4.0
============================================================
### START PATCHING
============================================================
[oracle@localhost patch]$ pwd
/vagrant/software/patch
[oracle@localhost patch]$ ls -l
total 60584
-rwxrwxrwx. 1 vagrant vagrant 54494225 Jun 6 02:36 p28186730_1394212_Generic.zip
-rwxrwxrwx. 1 vagrant vagrant 7539013 Jun 6 02:36 p33953823_122140_Generic.zip
[oracle@localhost patch]$ unzip -qo p33953823_122140_Generic.zip; echo $?
0
[oracle@localhost patch]$ ls -l
total 60860
drwxrwxrwx. 1 vagrant vagrant 0 Sep 15 2022 33953823
-rwxrwxrwx. 1 vagrant vagrant 268073 Sep 20 2022 oracle-goldengate-veridata-release-notes_12.2.1.4.220831.pdf
-rwxrwxrwx. 1 vagrant vagrant 54494225 Jun 6 02:36 p28186730_1394212_Generic.zip
-rwxrwxrwx. 1 vagrant vagrant 7539013 Jun 6 02:36 p33953823_122140_Generic.zip
-rwxrwxrwx. 1 vagrant vagrant 11408 Sep 21 2022 README.txt
[oracle@localhost patch]$ export ORACLE_HOME=/opt/oracle/wls
[oracle@localhost patch]$ $ORACLE_HOME/OPatch/opatch version
OPatch Version: 13.9.4.2.1
OPatch succeeded.
[oracle@localhost patch]$ $ORACLE_HOME/OPatch/opatch lspatches
There are no Interim patches installed in this Oracle Home "/opt/oracle/wls".
OPatch succeeded.
[oracle@localhost patch]$ cd 33953823
[oracle@localhost 33953823]$ pwd
/vagrant/software/patch/33953823
[oracle@localhost 33953823]$
[oracle@localhost 33953823]$ $ORACLE_HOME/OPatch/opatch apply /vagrant/software/patch/33953823
Oracle Interim Patch Installer version 13.9.4.2.1
Copyright (c) 2023, Oracle Corporation. All rights reserved.
Oracle Home       : /opt/oracle/wls
Central Inventory : /opt/oracle/oraInventory
   from           : /opt/oracle/wls/oraInst.loc
OPatch version    : 13.9.4.2.1
OUI version       : 13.9.4.0.0
Log file location : /opt/oracle/wls/cfgtoollogs/opatch/opatch2023-06-06_02-46-11AM_1.log
OPatch detects the Middleware Home as "/opt/oracle/wls"
Verifying environment and performing prerequisite checks...
OPatch continues with these patches: 33953823
Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.
Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/opt/oracle/wls')
Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch '33953823' to OH '/opt/oracle/wls'
ApplySession: Optional component(s) [ oracle.veridata.agent.core, 12.2.1.4.0 ] , [ oracle.veridata.agent.core, 12.2.1.4.0 ] not present in the Oracle Home or a higher version is found.
Patching component oracle.fmw.upgrade.veridata, 12.2.1.4.0...
Patching component oracle.veridata.web.core, 12.2.1.4.0...
Patching component oracle.veridata.web.core, 12.2.1.4.0...
Patching component oracle.rcu.veridata, 12.2.1.4.0...
Patch 33953823 successfully applied.
Log file location: /opt/oracle/wls/cfgtoollogs/opatch/opatch2023-06-06_02-46-11AM_1.log
OPatch succeeded.
[oracle@localhost 33953823]$
[oracle@localhost 33953823]$ $ORACLE_HOME/OPatch/opatch lsinventory
Oracle Interim Patch Installer version 13.9.4.2.1
Copyright (c) 2023, Oracle Corporation. All rights reserved.
Oracle Home       : /opt/oracle/wls
Central Inventory : /opt/oracle/oraInventory
   from           : /opt/oracle/wls/oraInst.loc
OPatch version    : 13.9.4.2.1
OUI version       : 13.9.4.0.0
Log file location : /opt/oracle/wls/cfgtoollogs/opatch/opatch2023-06-06_02-48-44AM_1.log
OPatch detects the Middleware Home as "/opt/oracle/wls"
Lsinventory Output file location : /opt/oracle/wls/cfgtoollogs/opatch/lsinv/lsinventory2023-06-06_02-48-44AM.txt
--------------------------------------------------------------------------------
Local Machine Information::
Hostname: localhost
ARU platform id: 226
ARU platform description:: Linux x86-64
Interim patches (1) :
Patch 33953823 : applied on Tue Jun 06 02:46:42 UTC 2023
Unique Patch ID: 24943512
Patch description: "One-off"
Created on 14 Sep 2022, 19:25:44 hrs PST8PDT
Bugs fixed:
34322822, 34202990, 33979317, 34275308, 30257704, 34028901, 33885974
33750454, 33719877, 33627880, 32585259, 32133466, 28279315, 33188570
20403129, 32852132, 32286962, 32821397, 32461542, 32213540, 32322787
32717596, 32313798, 30533210, 32531882, 32258415, 32486366, 32050877
32250963, 32249623, 32113971, 30351843, 32348306, 29376272, 30608181
30771003, 30811737, 30923601, 31126930, 31152591, 31290595, 31344851
31401520, 31442050, 31568607, 30778093, 21276396, 30712451, 25418342
30392409, 30425385, 30558507
--------------------------------------------------------------------------------
OPatch succeeded.
[oracle@localhost 33953823]$
This is a collection of open-source Python code snippets that you can use to download, fine-tune, and generate content with an LLM on your laptop:
<code>
# Snippet 1: fine-tune the model.
# model_fn (the model function) and the command-line args (model_path, output_path,
# dataset_path, batch_size, epochs) are assumed to be defined elsewhere in the
# snippet collection.
import tensorflow as tf
import numpy as np
import os
import json
import argparse

with open(os.path.join(args.model_path, "hyperparams.json"), "r") as f:
    hyperparams = json.load(f)

model = tf.compat.v1.estimator.Estimator(
    model_fn=model_fn,
    model_dir=args.output_path,
    params=hyperparams,
    config=tf.compat.v1.estimator.RunConfig(
        save_checkpoints_steps=5000,
        keep_checkpoint_max=10,
        save_summary_steps=5000))

# Define the input function for the dataset
def input_fn(mode):
    dataset = tf.data.TextLineDataset(args.dataset_path)
    dataset = dataset.repeat()
    dataset = dataset.shuffle(buffer_size=10000)
    dataset = dataset.batch(args.batch_size)
    dataset = dataset.map(lambda x: tf.strings.substr(x, 0, hyperparams["n_ctx"]))
    iterator = dataset.make_one_shot_iterator()
    return iterator.get_next()

# Define the training function
def train():
    for epoch in range(args.epochs):
        model.train(input_fn=lambda: input_fn(tf.estimator.ModeKeys.TRAIN))
        print(f"Epoch {epoch+1} completed.")

# Start the training
train()

# Snippet 2: generate text with the fine-tuned model.
# tokenizer (the subword tokenizer) is assumed to be defined elsewhere.

# Define the command-line arguments
parser = argparse.ArgumentParser()
parser.add_argument("--model_path", type=str, required=True)
parser.add_argument("--length", type=int, default=110)
parser.add_argument("--temperature", type=float, default=0.8)
args = parser.parse_args()

# Load the fine-tuned model
with open(os.path.join(args.model_path, "hyperparams.json"), "r") as f:
    hyperparams = json.load(f)

model = tf.compat.v1.estimator.Estimator(
    model_fn=model_fn,
    model_dir=args.model_path,
    params=hyperparams)

# Define the generation function
def generate_text(length, temperature):
    start_token = "<|startoftext|>"
    tokens = tokenizer.convert_tokens_to_ids([start_token])
    token_length = len(tokens)
    while token_length < length:
        prediction_input = np.array(tokens[-hyperparams["n_ctx"]:])
        output = list(model.predict(input_fn=lambda: [[prediction_input]]))[0]["logits"]
        # Softmax with temperature, then sample the next token
        logits = output[-1] / temperature
        logits = logits - np.max(logits)
        probs = np.exp(logits) / np.sum(np.exp(logits))
        token = np.random.choice(range(hyperparams["n_vocab"]), p=probs)
        tokens.append(token)
        token_length += 1
    output_text = tokenizer.convert_ids_to_tokens(tokens)
    output_text = "".join(output_text).replace("▁", " ")
    output_text = output_text.replace(start_token, "")
    return output_text

# Generate text
text = generate_text(args.length, args.temperature)
print(text)
</code>
Let us have a brief look at the Oracle 23c database parameters marked as deprecated that can in some respect be related to security. Here are the parameters marked as deprecated in 23c: SQL> col name for.... [Read More] Posted by Pete On 05/06/23 At 11:24 AM
SQL Profiles are fundamentally additional optimizer statistics at the SQL statement level, generated by the SQL Tuning Advisor to fill gaps in the Oracle optimizer's estimates.
My question is: can these additional, finer-grained optimizer statistics within profiles be shared/used by other similar SQL statements? Or does only the specific SQL statement for which the SQL profile was generated benefit?
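For context, I have been looking at the force_matching attribute of profiles, which (if I understand it correctly) is what controls whether a profile is shared across statements that differ only in literals. A rough sketch of what I mean (the task name is a placeholder):
<code>
-- see whether the existing profiles were created with force matching
SELECT name, category, force_matching
FROM   dba_sql_profiles;

-- accept a profile so that it also matches literal-only variations of the statement
DECLARE
  l_name VARCHAR2(128);
BEGIN
  l_name := DBMS_SQLTUNE.ACCEPT_SQL_PROFILE(
              task_name   => 'my_tuning_task',
              force_match => TRUE);
END;
/
</code>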
We have a system in which reference data (about 100 tables) is created and maintained by a set of SMEs in a database dedicated to this purpose. On frequent (but not regular) occasions, the schema is cloned to another schema which acts as a source for distribution to other databases. The reason for the second schema is so those SMEs are not interrupted for the duration of the distribution to the 20-ish other databases/schemas.
We have significant problems with the hand-coded and hand-maintained ETL process. For instance, when a new column is added to a reference table, someone must remember to add it to the ETL, and it is not infrequently forgotten. The dependency order of foreign keys can also bollix the process.
This distribution is on demand and not continuous, so it doesn't appear that regular database replication would work. Do you have any suggestions?
Second question: is it possible for deferred foreign keys and unique constraints to result in a non-ACID-compliant data condition?
Thanks in advance.
In response to your request for clarification: yes, the "ETL" is the cloning of the base schema to the secondary schema. In this case there is no transformation; it is a literal clone. There was a time when there were massive transformations, and the term just stuck.
Hello,
How do we configure TLS for connections from the Oracle database to another server (e.g. a mailing server)?
We have an on-premises SMTP email server to which the database pushes the emails to be sent out. All internal emails work perfectly, but external emails (e.g. to Gmail) are failing with ORA-29024: Certificate validation failure.
The TLS wallet has been created on the database server, and the SMTP email server certificates (which are still valid) have been added to this wallet. The code has also been updated with the wallet location and wallet password.
The mailing team confirmed they did not need our server certificate. At this point we are not sure what we are missing in our setup.
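For reference, the relevant part of our code looks roughly like this (a simplified sketch; the host, port, and wallet path are placeholders):
<code>
DECLARE
  l_conn UTL_SMTP.connection;
BEGIN
  l_conn := UTL_SMTP.open_connection(
              host            => 'smtp.example.com',
              port            => 587,
              wallet_path     => 'file:/u01/app/oracle/wallet',
              wallet_password => 'wallet_password');
  UTL_SMTP.ehlo(l_conn, 'dbhost.example.com');
  UTL_SMTP.starttls(l_conn);
  UTL_SMTP.ehlo(l_conn, 'dbhost.example.com');  -- re-issue EHLO after TLS is negotiated
  -- ... mail from / rcpt to / data calls follow here ...
  UTL_SMTP.quit(l_conn);
END;
/
</code>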
We would appreciate it if you could point us to documentation or a video, or give us advice on how to figure it out.
Thank you
Hi Tom, I ran two tests with cursors in Oracle Database 19c (19.3). In Test 1, why does print x work normally while print y fails? In Test 2, why does print y get halfway through before failing? I got the background information from: https://asktom.oracle.com/pls/apex/asktom.search?tag=ora-08103object-no-longer-exists
<b>[Test1]</b>
<code>SQL> select count(*) from dba_objects;
COUNT(*)
----------
13770
create table tx as select * from dba_objects where rownum < 11;
create table ty as select * from dba_objects;
variable x refcursor
variable y refcursor
declare
begin
open :x for select * from tx;
open :y for select * from ty;
end;
/
drop table tx;
drop table ty;
SQL> print x
OWNER OBJECT_NAME ....
--------- ----------------
SYS ICOL$
SYS I_USER1
SYS CON$
SYS UNDO$
SYS C_COBJ#
SYS I_OBJ#
SYS PROXY_ROLE_DATA$
SYS I_IND1
SYS I_CDEF2
SYS I_OBJ5
10 rows selected.
SQL> print y
ERROR:
ORA-08103: object no longer exists
no rows selected</code>
<b>[Test2]</b>
<code>create table tx as select * from dba_objects where rownum < 11;
create table ty as select * from dba_objects;
variable x refcursor
variable y refcursor
declare
nomeaning tx%rowtype;
begin
open :x for select * from tx;
open :y for select * from ty;
fetch :y into nomeaning;
end;
/
drop table tx;
drop table ty;
SQL> print x
OWNER OBJECT_NAME ....
--------- ----------------
SYS ICOL$
SYS I_USER1
SYS CON$
SYS UNDO$
SYS C_COBJ#
SYS I_OBJ#
SYS PROXY_ROLE_DATA$
SYS I_IND1
SYS I_CDEF2
SYS I_OBJ5
10 rows selected.
SQL> print y
OWNER OBJECT_NAME ....
--------- ----------------
... ...
SYS SYS_LOB0000006212C00003$$
ERROR:
ORA-08103: object no longer exists
6210 rows selected.</code>
This video introduces the DB Developer role in Oracle Database 23c.
I would like my schema to delete rows automatically after a certain number of days.
Therefore, I was planning to use this statement for example:
<code>
ALTER TABLE MY_TABLE USING TTL 5 days
</code>
However, I'm unable to find specifics in the TTL documentation about how FK relationships are handled in this scenario.
Will TTL automatically ignore these FK relationships when it reaches that date and expire these rows anyway?
(OR)
Will TTL fail, as a delete would fail, because these FK relationships exist and need to be removed first?
Hi,
I would like to create an initially deferred FK constraint as shown below. The problem is that the constraint cars_service_car_id_fk is validated right after the DELETE statement, not at the end of the transaction (COMMIT). I don't understand this behavior...
<code>
DROP TABLE cars;
CREATE TABLE cars(
car_id INTEGER
, name VARCHAR2(20)
, CONSTRAINT cars_pk PRIMARY KEY(car_id)
);
DROP TABLE cars_service;
CREATE TABLE cars_service(
service_id INTEGER
, car_id INTEGER
, CONSTRAINT cars_service_pk PRIMARY KEY(service_id)
, CONSTRAINT cars_service_car_id_fk
FOREIGN KEY (car_id)
REFERENCES cars(car_id) <b>DEFERRABLE INITIALLY DEFERRED</b>
);
INSERT INTO cars(car_id, name)
VALUES(1, 'Volvo');
INSERT INTO cars_service(service_id, car_id)
VALUES(1, 1);
COMMIT;
DELETE FROM cars
WHERE car_id = 1;
-- And now I'm getting the error too early...
<b>
DELETE FROM cars
WHERE car_id = 1
Error report -
ORA-02292:</b>
</code>