Classification of the Oracle Clusterware command set:

Node layer: olsnodes

Network layer: oifcfg

Cluster layer: crsctl, ocrcheck, ocrdump, ocrconfig

Application layer: srvctl, onsctl, crs_stat

Comparison of where the tools are installed for the grid user versus the oracle user:

grid user:

[grid@dbrac1 ~]$ which olsnodes
/oracle/app/11.2.0/grid/bin/olsnodes
[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ which oifcfg
/oracle/app/11.2.0/grid/bin/oifcfg
[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ which crsctl
/oracle/app/11.2.0/grid/bin/crsctl
[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ which ocrcheck
/oracle/app/11.2.0/grid/bin/ocrcheck
[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ which ocrconfig
/oracle/app/11.2.0/grid/bin/ocrconfig
[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ which ocrdump
/oracle/app/11.2.0/grid/bin/ocrdump
[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ which srvctl
/oracle/app/11.2.0/grid/bin/srvctl
[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ which onsctl
/oracle/app/11.2.0/grid/bin/onsctl
[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ which crs_stat
/oracle/app/11.2.0/grid/bin/crs_stat

oracle user:
[oracle@dbrac1 ~]$  which olsnodes
/usr/bin/which: no olsnodes in (/oracle/app/oracle/product/11.2.0/dbhome_1/bin:/oracle/app/oracle/product/11.2.0/dbhome_1/OPatch:/sbin:/bin:/usr/sbin:/usr/bin)
[oracle@dbrac1 ~]$
[oracle@dbrac1 ~]$ which oifcfg
/usr/bin/which: no oifcfg in (/oracle/app/oracle/product/11.2.0/dbhome_1/bin:/oracle/app/oracle/product/11.2.0/dbhome_1/OPatch:/sbin:/bin:/usr/sbin:/usr/bin)
[oracle@dbrac1 ~]$
[oracle@dbrac1 ~]$ which crsctl
/usr/bin/which: no crsctl in (/oracle/app/oracle/product/11.2.0/dbhome_1/bin:/oracle/app/oracle/product/11.2.0/dbhome_1/OPatch:/sbin:/bin:/usr/sbin:/usr/bin)
[oracle@dbrac1 ~]$
[oracle@dbrac1 ~]$ which ocrcheck
/usr/bin/which: no ocrcheck in (/oracle/app/oracle/product/11.2.0/dbhome_1/bin:/oracle/app/oracle/product/11.2.0/dbhome_1/OPatch:/sbin:/bin:/usr/sbin:/usr/bin)
[oracle@dbrac1 ~]$
[oracle@dbrac1 ~]$ which ocrconfig
/usr/bin/which: no ocrconfig in (/oracle/app/oracle/product/11.2.0/dbhome_1/bin:/oracle/app/oracle/product/11.2.0/dbhome_1/OPatch:/sbin:/bin:/usr/sbin:/usr/bin)
[oracle@dbrac1 ~]$
[oracle@dbrac1 ~]$ which ocrdump
/usr/bin/which: no ocrdump in (/oracle/app/oracle/product/11.2.0/dbhome_1/bin:/oracle/app/oracle/product/11.2.0/dbhome_1/OPatch:/sbin:/bin:/usr/sbin:/usr/bin)
[oracle@dbrac1 ~]$
[oracle@dbrac1 ~]$ which srvctl
/oracle/app/oracle/product/11.2.0/dbhome_1/bin/srvctl
[oracle@dbrac1 ~]$
[oracle@dbrac1 ~]$  which onsctl
/oracle/app/oracle/product/11.2.0/dbhome_1/bin/onsctl
[oracle@dbrac1 ~]$
[oracle@dbrac1 ~]$ which crs_stat
/usr/bin/which: no crs_stat in (/oracle/app/oracle/product/11.2.0/dbhome_1/bin:/oracle/app/oracle/product/11.2.0/dbhome_1/OPatch:/sbin:/bin:/usr/sbin:/usr/bin)
[oracle@dbrac1 ~]$
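
As the output above shows, all of the clusterware tools live in the Grid Infrastructure home (/oracle/app/11.2.0/grid/bin), while the database home only ships srvctl and onsctl; that is why the oracle user cannot find olsnodes, oifcfg, crsctl, ocrcheck, ocrconfig, ocrdump or crs_stat. If those tools are occasionally needed from the oracle account, one option is to append the Grid home's bin directory to PATH. This is only a minimal sketch for this particular environment (the GRID_HOME variable name is just for readability); the usual practice is to run clusterware commands as the grid user, or as root where required.

# Optional addition to the oracle user's shell profile; the Grid home path
# is the one reported by the grid user above.
export GRID_HOME=/oracle/app/11.2.0/grid
export PATH=$PATH:$GRID_HOME/bin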


I. Node layer
[grid@dbrac1 ~]$ olsnodes -h
Usage: olsnodes [ [-n] [-i] [-s] [-t] [<node> | -l [-p]] | [-c] ] [-g] [-v]
        where
                -n print node number with the node name
                -p print private interconnect address for the local node
                -i print virtual IP address with the node name
                <node> print information for the specified node
                -l print information for the local node
                -s print node status - active or inactive
                -t print node type - pinned or unpinned
                -g turn on logging
                -v Run in debug mode; use at direction of Oracle Support only.
                -c print clusterware name
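
A few typical invocations, combining only the switches listed above (output omitted because it is environment specific):

# List every node with its node number, VIP address and active/inactive status
olsnodes -n -i -s

# Show the private interconnect address of the local node
olsnodes -l -p

# Print the cluster (clusterware) name
olsnodes -c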


II. Network layer
[grid@dbrac1 ~]$ oifcfg -h

Name:
        oifcfg - Oracle Interface Configuration Tool.

Usage:  oifcfg iflist [-p [-n]]
        oifcfg setif {-node <nodename> | -global} {<if_name>/<subnet>:<if_type>}...
        oifcfg getif [-node <nodename> | -global] [ -if <if_name>[/<subnet>] [-type <if_type>] ]
        oifcfg delif {{-node <nodename> | -global} [<if_name>[/<subnet>]] [-force] | -force}
        oifcfg [-help]

        <nodename> - name of the host, as known to a communications network
        <if_name>  - name by which the interface is configured in the system
        <subnet>   - subnet address of the interface
        <if_type>  - type of the interface { cluster_interconnect | public }
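
A short sketch using the verbs shown above; the interface name eth1 and subnet 10.0.0.0 are placeholders for whatever your cluster actually uses:

# Show the interfaces the clusterware has registered, with their subnets and types
oifcfg getif

# List all interfaces visible on this node, including subnet (-p) and netmask (-n)
oifcfg iflist -p -n

# Register eth1/10.0.0.0 cluster-wide as the private interconnect (placeholder values)
oifcfg setif -global eth1/10.0.0.0:cluster_interconnect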


III. Cluster layer
1. crsctl
[grid@dbrac1 ~]$ crsctl
Usage: crsctl <command> <object> [<options>]
    command: enable|disable|config|start|stop|relocate|replace|status|add|delete|modify|getperm|setperm|check|set|get|unset|debug|lsmodules|query|pin|unpin|discover|release|request
For complete usage, use:
    crsctl [-h | --help]
For detailed help on each command and object and its options use:
    crsctl <command> <object> -h  e.g. crsctl relocate resource -h
[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ crsctl enable -h
Usage:
  crsctl enable crs
     Enable OHAS autostart on this server

[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ crsctl disable -h
Usage:
  crsctl disable crs
     Disable OHAS autostart on this server

[grid@dbrac1 ~]$ crsctl config -h
Usage:
  crsctl config crs
     Display OHAS autostart config on this server
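
Taken together, enable, disable and config manage Clusterware autostart on a node. A typical pattern around planned OS maintenance (the enable/disable commands must be run as root):

# Check whether the stack will start automatically at boot
crsctl config crs

# Disable autostart before the maintenance window, re-enable it afterwards
crsctl disable crs
crsctl enable crs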

[grid@dbrac1 ~]$ crsctl start -h
Usage:
  crsctl start resource {<resName> [...]|-w <filter>|-all} [-n <server>] [-k <cid>] [-d <did>] [-env "env1=val1,env2=val2,..."] [-f] [-i]
     Start designated resources
where
     resName [...]    One or more blank-separated resource names
     -w               Resource filter
     -all             All resources
     -n               Server name
     -k               Resource cardinality ID
     -d               Resource degree ID
     -env             Attribute overrides for this command
     -f               Force option
     -i               Fail if request cannot be processed immediately


  crsctl start crs [-excl [-nocrs]|-nowait]
     Start OHAS on this server
where
     -excl        Start Oracle Clusterware in exclusive mode
     -nocrs       Start Oracle Clusterware in exclusive mode without starting CRS
     -nowait      Do not wait for OHAS to start

  crsctl start cluster [[-all]|[-n <server>[...]]]
    Start CRS stack
where
    Default         Start local server
    -all            Start all servers
    -n              Start named servers
    server [...]    One or more blank-separated server names



  crsctl start ip -A {<IP_name>|<IP_address>}/<net_mask>/<interface_name>
    Start an IP on the given interface with specified net mask
Where
    IP_name              Name which resolves to an IP. If it is not a fully qualified domain name then standard name search will be used.
    IP_address           IP address
    net_mask             Subnet mask for the IP to start
    interface_name       Interface on which to start the IP

  crsctl start testdns [-address <IP_address>] [-port <port>][-domain <GNS_domain>] [-once][-v]
    Start a test DNS listener that listens on the given address at the given port and for specified domain
Where
    IP_address          IP address to be used by the listener (defaults to hostname)
    port                The port on which the listener will listen. Default value is 53.
    domain              The domain query for which to listen. By default, all domain queries are processed.

    -once               Flag indicating that DNS listener should exit after one DNS query packet is received
    -v                  Verbose output
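
In day-to-day administration the cluster and crs forms are the ones used most often, for example (crsctl start crs must be run as root):

# Start the CRS stack on every node of the cluster from one session
crsctl start cluster -all

# Start the full OHAS stack on the local node only
crsctl start crs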

[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ crsctl stop -h
Usage:
  crsctl stop resource {<resName>[...]|-w <filter>|-all} [-n <server>] [-k <cid>] [-d <did>] [-env "env1=val1,env2=val2,..."] [-f] [-i]
     Stop designated resources
where
     resName [...]    One or more blank-separated resource names
     -w               Resource filter
     -all             All resources
     -n               Server name
     -k               Resource cardinality ID
     -d               Resource degree ID
     -env             Attribute overrides for this command
     -f               Force option
     -i               Fail if request cannot be processed immediately

  crsctl stop crs [-f]
    Stop OHAS on this server
where
    -f              Force option

  crsctl stop cluster [[-all]|[-n <server>[...]]] [-f]
    Stop CRS stack
where
    Default         Stop local server
    -all            Stop all servers
    -n              Stop named servers
    server [...]    One or more blank-separated server names
    -f              Force option

  crsctl stop ip -A {<IP_name>|<IP_address>}/<interface_name>
     Stop the designated IP address
Where
     IP_name              Name which resolves to an IP. If it is not fully qualified domain name then standard name search will be performed
     IP_address           IP address
     interface_name       Interface on which IP was started

  crsctl stop testdns [-address <IP_address>] [-port <port>] [-v]
    Stop the test DNS listener that listens on the given address and at the given port
Where
    IP_address          IP address on which testdns was started (defaults to hostname)
     port                The port on which the listener is listening. Default value for the port is 53.
    -v                  Verbose output
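
The matching shutdown commands, for example (crsctl stop crs must be run as root; add -f only when resources refuse to stop cleanly):

# Stop the CRS stack on all nodes
crsctl stop cluster -all

# Stop the whole stack, including OHAS, on the local node
crsctl stop crs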

[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ crsctl relocate -h
Usage:
  crsctl relocate resource {<resName> [-k <cid>]| {resName|-all} -s <server>|-w <filter>} [-n <server>] [-env "env1=val1,env2=val2,..."] [-f] [-i]
     Relocate designated resources
where
     resName        Resource named resource
     -all           Relocate all resources
     -s             Source server
     -w             Resource filter
     -n             Destination server
     -k             Cardinality ID
     -env           Attribute overrides for this command
     -f             Force option
     -i             Fail if request cannot be processed immediately

  crsctl relocate server <server> [...] -c <spName> [-f] [-i]
     Relocate designated servers
where
     server [...]   One or more blank-separated server names
     spName         Destination server pool name
     -f             Force option
     -i             Fail if request cannot be processed immediately

[grid@dbrac1 ~]$
[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ crsctl replace -h
Usage:
  crsctl replace {discoverystring <ds_string>| votedisk [<+diskgroup>|<vdisk> ... <vdisk>]}
where
    ds_string   comma-separated voting file paths without spaces and enclosed in quotes
    diskgroup   diskgroup where the voting files will be located in ASM
    vdisk       location of the voting files outside of ASM separated by space

[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ crsctl status -h
Usage:
  crsctl status {resource|type|serverpool|server} [<name>|-w <filter>] [-g]
where
    name   CRS entity name
    -w     CRS entity filter
    -g     Check if CRS entities are registered

  crsctl status ip -A {<IP_name>|<IP_address>}
     Check if the IP is alive
Where
     IP_name        Name which resolves to an IP. If name is not fully qualified domain name then standard name search will be used.
     IP_address     IP address

  crsctl status testdns [-address <IP_address>] [-port <port>] [-v]
     Check status of DNS server for specified domain
Where
     IP_address          DNS server address (defaults to hostname)
     port                The port on which the DNS server is listening. Default value for the port is 53.
     -v                  Verbose output
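
Some common status queries; the resource name ora.DATA.dg below is a placeholder, and the tabular -t switch, while not spelled out in the one-line summary above, is the usual replacement for the old crs_stat -t:

# List every registered resource
crsctl status resource

# Tabular overview of name, target and state per resource
crsctl status resource -t

# Check whether a particular resource is registered (placeholder name)
crsctl status resource ora.DATA.dg -g
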
[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ crsctl add -h
Usage:
  crsctl add {resource|type|serverpool} <name> <options>
where
   name          Name of the CRS entity
   options       Options to be passed to the add command

   See individual CRS entity help for more details

  crsctl add crs administrator -u <user_name> [-f]
where  
   user_name     User name to be added to the admin list or "*"
   -f            Override user name validity check

  crsctl add css votedisk <vdisk>[...] <options>
where
   vdisk [...]   One or more blank-separated voting file paths
   options       Options to be passed to the add command

[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ crsctl delete -h
Usage:
  crsctl delete resource <resName> [...] [-f] [-i]
where
    resName [...]   One or more resource names to be deleted
    -f               Force option
    -i               Fail if request cannot be processed immediately

  crsctl delete type <typeName> [...] [-i]
where
    typeName [...]  One or more blank-separated resource type names
    -i              Fail if request cannot be processed immediately

  crsctl delete serverpool <spName> [...] [-i]
where
    spName [...]    One or more server pool names to be deleted
    -i              Fail if request cannot be processed immediately

  crsctl delete crs administrator -u <user_name> [-f]
where
    user_name       User name to be deleted from the admin list or "*"
    -f              Override user name validity check

  crsctl delete css votedisk {<vdiskGUID>[...]|<vdisk>[...]|+<diskgroup>}
where
    vdiskGUID [...]     One or more blank-separated voting file GUIDs
    vdisk [...]         One or more blank-separated voting file paths
    diskgroup           The name of a diskgroup containing voting files;
                        allowed only when clusterware is in exclusive mode

  crsctl delete node -n <nodename>
where
    nodename     Node to be deleted

[grid@dbrac1 ~]$ crsctl modify -h
Usage:
  crsctl modify {resource|type|serverpool} <name> <options>
where
    name         Name of the CRS entity
    options      Options to be passed to the modify command

    See individual CRS entity help for more details

[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ crsctl getperm -h
Usage:
  crsctl getperm resource <resName> [[-u <user_name>]|[-g <group_name>]]
where
    resName    Get permissions for named resource
    -u         Get permissions for user name
    -g         Get permissions for group name

  crsctl getperm type <typeName> [[-u <user_name>]|[-g <group_name>]]
where
    typeName   Get permissions for named resource type
    -u         Get permissions for user name
    -g         Get permissions for group name

  crsctl getperm serverpool <spName> [[-u <user_name>]|[-g <group_name>]]
where
    spName     Get permissions for named server pool
    -u         Get permissions for user name
    -g         Get permissions for group name

[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ crsctl setperm -h
Usage:
  crsctl setperm {resource|type|serverpool} <name> {-u <aclstring>|-x <aclstring>|-o <user_name>|-g <group_name>}
where
     -u           Update entity ACL
     -x           Delete entity ACL
     -o           Change entity owner
     -g           Change entity primary group

   ACL (Access Control List) string:

   { user:<user_name>[:<readPerm><writePerm><execPerm>]   |
     group:<group_name>[:<readPerm><writePerm><execPerm>] |
     other[::<readPerm><writePerm><execPerm>] }
where
   user         User ACL
   group        Group ACL
   other        Other ACL
   readPerm     Read permission ("r" grants, "-" forbids)
   writePerm    Write permission ("w" grants, "-" forbids)
   execPerm     Execute permission ("x" grants, "-" forbids)

[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ crsctl check -h
Usage:
  crsctl check crs
     Check status of OHAS and CRS stack

  crsctl check cluster [[-all]|[-n <server>[...]]]
     Check status of CRS stack

  crsctl check ctss
    Check status of Cluster Time Synchronization Services

  crsctl check resource {<resName> [...]|-w <filter>} [-n <server>] [-k <cid>] [-d <did>]
     Check status of resources

  crsctl check css
     Check status of Cluster Synchronization Services

  crsctl check evm
     Check status of Event Manager
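
The checks most often run when a node looks unhealthy:

# OHAS and CRS stack health on the local node
crsctl check crs

# CRS stack health on every node
crsctl check cluster -all

# Cluster time synchronization status
crsctl check ctss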

[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ crsctl set -h
Usage:
  crsctl set crs activeversion [<version>] [-force]
    Sets the Oracle Clusterware operating version.

  crsctl set {log|trace} {mdns|gpnp|css|crf|crs|ctss|evm|gipc} "<name1>=<lvl1>,..."
    Set the log/trace levels for specific modules within daemons

  crsctl set log res <resname>=<lvl>
    Set the log levels for agents

  crsctl set css <parameter> <value>
    Sets the value of a Cluster Synchronization Services parameter

  crsctl set css {ipmiaddr|ipmiadmin} <value>
    Sets IPMI configuration data in the Oracle registry

  crsctl set css votedisk {asm <diskgroup>|raw <vdisk>[...]}
    Defines the set of voting disks to be used by CRS

[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ crsctl get -h
Usage:
  crsctl get {log|trace} {mdns|gpnp|css|crf|crs|ctss|evm|gipc} "<name1>,..."
    Get the log/trace levels for specific modules

  crsctl get log res <resname>
    Get the log level for an agent

  crsctl get hostname
    Displays the host name

  crsctl get nodename
    Displays the node name

  crsctl get clientid dhcp -cluname <cluster_name> -viptype <vip_type> [-vip <VIPResName>] [-n <nodename>]
    Generate client ID's as used by RAC agents for configured cluster resources
where
    cluster_name    name of the cluster to be configured

    vip_type        Type of VIP resource: HOSTVIP, SCANVIP, or APPVIP
    VIPResName      User defined application VIP name (required for APPVIP vip_type)
    nodename        Node for which the client ID is required (required for HOSTVIP vip_type)

[grid@dbrac1 ~]$
[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ crsctl unset -h
Usage:
  crsctl unset css <parameter>
    Unsets the value of a Cluster Synchronization Services parameter, restoring its default value

    diagwait
    disktimeout
    misscount
    reboottime
    priority
    logfilesize

  crsctl unset css ipmiconfig
    Unsets the IPMI configuration and deletes the associated Oracle Registry entries.

[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ crsctl debug -h
Usage:
  crsctl debug statedump {crs|css|evm}
 where
   crs           Cluster Ready Services
   css           Cluster Synchronization Services
   evm           Event Manager

[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ crsctl lsmodules -h
Usage:
  crsctl lsmodules {mdns|gpnp|css|crf|crs|ctss|evm|gipc}
 where
   mdns  multicast Domain Name Server
   gpnp  Grid Plug-n-Play Service
   css   Cluster Synchronization Services
   crf   Cluster Health Monitor
   crs   Cluster Ready Services
   ctss  Cluster Time Synchronization Service
   evm   EventManager
   gipc  Grid Interprocess Communications

[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ crsctl query -h
Usage:
  crsctl query crs administrator
     Display admin list

  crsctl query crs activeversion
     Lists the Oracle Clusterware operating version

  crsctl query crs releaseversion
     Lists the Oracle Clusterware release version

  crsctl query crs softwareversion [<nodename>| -all]
     Lists the version of Oracle Clusterware software installed
where
     Default         List software version of the local node
     nodename        List software version of named node
     -all            List software version for all the nodes in the cluster

  crsctl query css ipmiconfig
     Checks whether Oracle Clusterware has been configured for IPMI

  crsctl query css ipmidevice
     Checks whether the IPMI device/driver is present

  crsctl query css votedisk
     Lists the voting files used by Cluster Synchronization Services

  crsctl query wallet -type <wallet_type> [-name <name>] [-user <user_name>]
     Check if the designated wallet or user exists
where
     wallet_type     Type of wallet i.e. APPQOSADMIN, APPQOSUSER or APPQOSDB.
     name            Name is required for APPQOSDB wallets.
     user_name       User to be queried from wallet.

  crsctl query dns -servers
     Lists the system configured DNS server, search paths, attempt and timeout values

  crsctl query dns -name <name> [-dnsserver <DNS_server_address>] [-port <port>] [-attempts <attempts>] [-timeout <timeout>] [-v]
    Returns a list of addresses returned by DNS lookup of the name with the specified DNS server
Where
    name                Fully qualified domain name to lookup
    DNS_server_address  Address of the DNS server on which name needs to be looked up
    port                Port on which DNS server is listening
    attempts            Number of retry attempts
    timeout             Timeout in seconds

  crsctl query socket udp [-address <address>] [-port <port>]
     Verifies that a daemon can listen on specified address and port
Where
       address             IP address on which socket needs to be created
       port                port on which socket needs to be created
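
Two queries that come up constantly during patching and troubleshooting:

# Where are the voting files stored?
crsctl query css votedisk

# Clusterware version actually in use versus the software installed on each node
crsctl query crs activeversion
crsctl query crs softwareversion -all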

[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ crsctl pin -h
Usage:
  crsctl pin css -n <node1>[...]
 Pin the nodes (make leases non-expiring).

[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ crsctl unpin -h
Usage:
  crsctl unpin css -n <node1>[...]
 Unpin the nodes (allow leases to expire).

[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ crsctl discover -h
Usage:
  crsctl discover dhcp -clientid <clientid> [-port <port>]
     Discover DHCP server
where
      clientid         client ID for which discovery will be attempted
      port             The port on which the discovery packets will be sent

[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ crsctl release -h
Usage:
  crsctl release dhcp -clientid <clientid> [-port <port>]
     Release DHCP lease for the client ID specified
where
      clientid         client ID for which DHCP lease release request will be attempted
      port             The port on which the DHCP lease release packets will be sent

[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ crsctl request -h
Usage:
  crsctl request dhcp -clientid <clientid> [-port <port>]
     Request DHCP lease for the client ID specified
where
      clientid         client ID for which DHCP lease request will be attempted
      port             The port on which the DHCP lease request packets will be sent

[grid@dbrac1 ~]$


2. ocrcheck
[grid@dbrac1 ~]$ ocrcheck -h
Name:
        ocrcheck - Displays health of Oracle Cluster/Local Registry.

Synopsis:
        ocrcheck [-config] [-local]

  -config       Displays the configured locations of the Oracle Cluster Registry.
                This can be used with the -local option to display the configured
                location of the Oracle Local Registry
  -local        The operation will be performed on the Oracle Local Registry.

Notes:
        A log file will be created in
        $ORACLE_HOME/log/<hostname>/client/ocrcheck_<pid>.log.
        File creation privileges in the above directory are needed
        when running this tool.
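
Typical checks; without options ocrcheck verifies the OCR, with -local it verifies this node's OLR:

# Verify integrity and report size/usage of the Oracle Cluster Registry
ocrcheck

# Show only the configured OCR locations
ocrcheck -config

# Run the same check against the Oracle Local Registry
ocrcheck -local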

[grid@dbrac1 ~]$


3. ocrconfig
[grid@dbrac1 ~]$ ocrconfig -h
Name:
        ocrconfig - Configuration tool for Oracle Cluster/Local Registry.

Synopsis:
        ocrconfig [option]
        option:
                [-local] -export <filename>
                                                    - Export OCR/OLR contents to a file
                [-local] -import <filename>         - Import OCR/OLR contents from a file
                [-local] -upgrade [<user> [<group>]]
                                                    - Upgrade OCR from previous version
                -downgrade [-version <version string>]
                                                    - Downgrade OCR to the specified version
                [-local] -backuploc <dirname>       - Configure OCR/OLR backup location
                [-local] -showbackup [auto|manual]  - Show OCR/OLR backup information
                [-local] -manualbackup              - Perform OCR/OLR backup
                [-local] -restore <filename>        - Restore OCR/OLR from physical backup
                -replace <current filename> -replacement <new filename>
                                                    - Replace an OCR device or file <current filename> with <new filename>
                -add <filename>                     - Add a new OCR device/file
                -delete <filename>                  - Remove a OCR device/file
                -overwrite                          - Overwrite OCR configuration on disk
                -repair -add <filename> | -delete <filename> | -replace <current filename> -replacement <new filename>
                                                    - Repair OCR configuration on the local node
                -help                               - Print out this help information

Note:
        * A log file will be created in
        $ORACLE_HOME/log/<hostname>/client/ocrconfig_<pid>.log. Please ensure
        you have file creation privileges in the above directory before
        running this tool.
        * Only -local -showbackup [manual] is supported.
        * Use option '-local' to indicate that the operation is to be performed on the Oracle Local Registry.
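
Day-to-day use of ocrconfig is mostly around backups; most of these operations must be run as root, and the export file name below is just a placeholder:

# List the automatic and manual OCR backups clusterware has taken
ocrconfig -showbackup

# Take an on-demand physical backup of the OCR
ocrconfig -manualbackup

# Write a logical export of the OCR contents to a file
ocrconfig -export /tmp/ocr_export.bak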

[grid@dbrac1 ~]$


4. ocrdump
[grid@dbrac1 ~]$ ocrdump -h
Name:
        ocrdump - Dump contents of Oracle Cluster/Local Registry to a file.

Synopsis:
        ocrdump [-local] [<filename>|-stdout] [-backupfile <backupfilename>] [-keyname <keyname>] [-xml] [-noheader]

Description:
        Default filename is OCRDUMPFILE. Examples are:

        prompt> ocrdump
        writes cluster registry contents to OCRDUMPFILE in the current directory

        prompt> ocrdump MYFILE
        writes cluster registry contents to MYFILE in the current directory

        prompt> ocrdump -stdout -keyname SYSTEM
        writes the subtree of SYSTEM in the cluster registry to stdout

        prompt> ocrdump -local -stdout -xml
        writes local registry contents to stdout in xml format

        prompt> ocrdump -backupfile /oracle/CRSHOME/backup.ocr -stdout -xml
        writes registry contents in the backup file to stdout in xml format

Notes:
        The header information will be retrieved based on best effort basis.
        A log file will be created in
        $ORACLE_HOME/log/<hostname>/client/ocrdump_<pid>.log. Make sure
        you have file creation privileges in the above directory before
        running this tool.
        Use option '-local' to indicate that the operation is to be performed on the Oracle Local Registry.

[grid@dbrac1 ~]$


IV. Application layer
1. srvctl
[grid@dbrac1 ~]$ srvctl
Usage: srvctl <command> <object> [<options>]
    commands: enable|disable|start|stop|relocate|status|add|remove|modify|getenv|setenv|unsetenv|config|convert|upgrade
    objects: database|instance|service|nodeapps|vip|network|asm|diskgroup|listener|srvpool|server|scan|scan_listener|oc4j|home|filesystem|gns|cvu
For detailed help on each command and object and its options use:
  srvctl <command> -h or
  srvctl <command> <object> -h
[grid@dbrac1 ~]$

[grid@dbrac1 ~]$ srvctl enable -h

The SRVCTL enable command enables the named object so that it can run under Oracle Clusterware for automatic startup, failover, or restart.

Usage: srvctl enable database -d <db_unique_name> [-n <node_name>]
Usage: srvctl enable instance -d <db_unique_name> -i "<inst_name_list>"
Usage: srvctl enable service -d <db_unique_name> -s "<service_name_list>" [-i <inst_name> | -n <node_name>]
Usage: srvctl enable asm [-n <node_name>]
Usage: srvctl enable listener [-l <lsnr_name>] [-n <node_name>]
Usage: srvctl enable nodeapps [-g] [-v]
Usage: srvctl enable vip -i <vip_name> [-v]
Usage: srvctl enable scan [-i <ordinal_number>]
Usage: srvctl enable scan_listener [-i <ordinal_number>]
Usage: srvctl enable oc4j [-n <node_name>] [-v]
Usage: srvctl enable filesystem -d <volume_device>
Usage: srvctl enable diskgroup -g <dg_name> [-n "<node_list>"]
Usage: srvctl enable gns [-n <node_name>] [-v]
Usage: srvctl enable cvu [-n <node_name>]
For detailed help on each command and object and its options use:
  srvctl <command> <object> -h
[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ srvctl disable -h

The SRVCTL disable command disables the named object so that it is unavailable to run under Oracle Clusterware for automatic startup, failover, or restart.

Usage: srvctl disable database -d <db_unique_name> [-n <node_name>]
Usage: srvctl disable instance -d <db_unique_name> -i "<inst_name_list>"
Usage: srvctl disable service -d <db_unique_name> -s "<service_name_list>" [-i <inst_name> | -n <node_name>]
Usage: srvctl disable asm [-n <node_name>]
Usage: srvctl disable listener [-l <lsnr_name>] [-n <node_name>]
Usage: srvctl disable nodeapps [-g] [-v]
Usage: srvctl disable vip -i <vip_name> [-v]
Usage: srvctl disable scan [-i <ordinal_number>]
Usage: srvctl disable scan_listener [-i <ordinal_number>]
Usage: srvctl disable oc4j [-n <node_name>] [-v]
Usage: srvctl disable filesystem -d <volume_device>
Usage: srvctl disable diskgroup -g <dg_name> [-n "<node_list>"]
Usage: srvctl disable gns [-n <node_name>] [-v]
Usage: srvctl disable cvu [-n <node_name>]
For detailed help on each command and object and its options use:
  srvctl <command> <object> -h
[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ srvctl start -h

The SRVCTL start command starts, Oracle Clusterware enabled, non-running objects.

Usage: srvctl start database -d <db_unique_name> [-o <start_options>] [-n <node>]
Usage: srvctl start instance -d <db_unique_name> {-n <node_name> [-i <inst_name>] | -i <inst_name_list>} [-o <start_options>]
Usage: srvctl start service -d <db_unique_name> [-s "<service_name_list>" [-n <node_name> | -i <inst_name>] ] [-o <start_options>]
Usage: srvctl start nodeapps [-n <node_name>] [-g] [-v]
Usage: srvctl start vip { -n <node_name> | -i <vip_name> } [-v]
Usage: srvctl start asm [-n <node_name>] [-o <start_options>]
Usage: srvctl start listener [-l <lsnr_name>] [-n <node_name>]
Usage: srvctl start scan [-i <ordinal_number>] [-n <node_name>]
Usage: srvctl start scan_listener [-n <node_name>] [-i <ordinal_number>]
Usage: srvctl start oc4j [-v]
Usage: srvctl start home -o <oracle_home> -s <state_file> -n <node_name>
Usage: srvctl start filesystem -d <volume_device> [-n <node_name>]
Usage: srvctl start diskgroup -g <dg_name> [-n "<node_list>"]
Usage: srvctl start gns [-l <log_level>] [-n <node_name>] [-v]
Usage: srvctl start cvu [-n <node_name>]
For detailed help on each command and object and its options use:
  srvctl <command> <object> -h
[grid@dbrac1 ~]$
[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ srvctl stop -h

The SRVCTL stop command stops, Oracle Clusterware enabled, starting or running objects.

Usage: srvctl stop database -d <db_unique_name> [-o <stop_options>] [-f]
Usage: srvctl stop instance -d <db_unique_name> {-n <node_name> | -i <inst_name_list>}  [-o <stop_options>] [-f]
Usage: srvctl stop service -d <db_unique_name> [-s "<service_name_list>" [-n <node_name> | -i <inst_name>] ] [-f]
Usage: srvctl stop nodeapps [-n <node_name>] [-g] [-f] [-r] [-v]
Usage: srvctl stop vip { -n <node_name>  | -i <vip_name> } [-f] [-r] [-v]
Usage: srvctl stop asm [-n <node_name>] [-o <stop_options>] [-f]
Usage: srvctl stop listener [-l <lsnr_name>] [-n <node_name>] [-f]
Usage: srvctl stop scan [-i <ordinal_number>] [-f]
Usage: srvctl stop scan_listener [-i <ordinal_number>] [-f]
Usage: srvctl stop oc4j [-f] [-v]
Usage: srvctl stop home -o <oracle_home> -s <state_file> -n <node_name> [-t <stop_options>] [-f]
Usage: srvctl stop filesystem -d <volume_device> [-n <node_name>] [-f]
Usage: srvctl stop diskgroup -g <dg_name> [-n "<node_list>"] [-f]
Usage: srvctl stop gns [-n <node_name>] [-f] [-v]
Usage: srvctl stop cvu [-f]
For detailed help on each command and object and its options use:
  srvctl <command> <object> -h
[grid@dbrac1 ~]$
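
A typical start/stop pair, using a hypothetical database unique name orcl and the node name from this cluster:

# Start / stop the whole database across the cluster
srvctl start database -d orcl
srvctl stop database -d orcl -o immediate

# Start / stop a single instance on a named node
srvctl start instance -d orcl -n dbrac1
srvctl stop instance -d orcl -n dbrac1
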
[grid@dbrac1 ~]$ srvctl relocate -h

The SRVCTL relocate command relocates a running Oracle Clusterware managed object to another node or instance.

Usage: srvctl relocate database -d <db_unique_name> {[-n <target>] [-w <timeout>] | -a [-r]} [-v]
Usage: srvctl relocate service -d <db_unique_name> -s <service_name> {-i <old_inst_name> -t <new_inst_name> | -c <current_node> -n <target_node>} [-f]
Usage: srvctl relocate server -n "<server_list>" -g <pool_name> [-f]
Usage: srvctl relocate vip -i <vip_name> [-n <node_name>] [-f] [-v]
Usage: srvctl relocate scan -i <ordinal_number> [-n <node_name>]
Usage: srvctl relocate scan_listener -i <ordinal_number> [-n <node_name>]
Usage: srvctl relocate oc4j [-n <node_name>] [-v]
Usage: srvctl relocate gns [-n <node_name>] [-v]
Usage: srvctl relocate cvu [-n <node_name>]
For detailed help on each command and object and its options use:
  srvctl <command> <object> -h
[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ srvctl status -h

The SRVCTL status command displays the current state of the object.

Usage: srvctl status database -d <db_unique_name> [-f] [-v]
Usage: srvctl status instance -d <db_unique_name> {-n <node_name> | -i <inst_name_list>} [-f] [-v]
Usage: srvctl status service -d <db_unique_name> [-s "<service_name_list>"] [-f] [-v]
Usage: srvctl status nodeapps
Usage: srvctl status vip { -n <node_name> | -i <vip_name> } [-v]
Usage: srvctl status listener [-l <lsnr_name>] [-n <node_name>] [-v]
Usage: srvctl status asm [-n <node_name>] [-a] [-v]
Usage: srvctl status scan [-i <ordinal_number>] [-v]
Usage: srvctl status scan_listener [-i <ordinal_number>] [-v]
Usage: srvctl status srvpool [-g <pool_name>] [-a]
Usage: srvctl status server -n "<server_list>" [-a]
Usage: srvctl status oc4j [-n <node_name>] [-v]
Usage: srvctl status home -o <oracle_home> -s <state_file> -n <node_name>
Usage: srvctl status filesystem -d <volume_device> [-v]
Usage: srvctl status diskgroup -g <dg_name> [-n "<node_list>"] [-a] [-v]
Usage: srvctl status cvu [-n <node_name>]
Usage: srvctl status gns [-n <node_name>] [-v]
For detailed help on each command and object and its options use:
  srvctl <command> <object> -h
[grid@dbrac1 ~]$
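
For example, with the same hypothetical database name:

# Which node is each instance running on? (-v also lists running services)
srvctl status database -d orcl -v

# Node applications (VIPs, ONS, ...) and SCAN listeners
srvctl status nodeapps
srvctl status scan_listener
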
[grid@dbrac1 ~]$ srvctl add -h

The SRVCTL add command adds the configuration and the Oracle Clusterware application to the OCR for the cluster database, named instances, named services, or for the named nodes.

Usage: srvctl add database -d <db_unique_name> -o <oracle_home> [-c {RACONENODE | RAC | SINGLE} [-e <server_list>] [-i <inst_name>] [-w <timeout>]] [-m <domain_name>] [-p <spfile>] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-s <start_options>] [-t <stop_options>] [-n <db_name>] [-y {AUTOMATIC | MANUAL | NORESTART}] [-g "<serverpool_list>"] [-x <node_name>] [-a "<diskgroup_list>"] [-j "<acfs_path_list>"]
Usage: srvctl add instance -d <db_unique_name> -i <inst_name> -n <node_name> [-f]
Usage: srvctl add service -d <db_unique_name> -s <service_name> {-r "<preferred_list>" [-a "<available_list>"] [-P {BASIC | NONE | PRECONNECT}] | -g <pool_name> [-c {UNIFORM | SINGLETON}] } [-k   <net_num>] [-l [PRIMARY][,PHYSICAL_STANDBY][,LOGICAL_STANDBY][,SNAPSHOT_STANDBY]] [-y {AUTOMATIC | MANUAL}] [-q {TRUE|FALSE}] [-x {TRUE|FALSE}] [-j {SHORT|LONG}] [-B {NONE|SERVICE_TIME|THROUGHPUT}] [-e {NONE|SESSION|SELECT}] [-m {NONE|BASIC}] [-z <failover_retries>] [-w <failover_delay>] [-t <edition>] [-f]
Usage: srvctl add service -d <db_unique_name> -s <service_name> -u {-r "<new_pref_inst>" | -a "<new_avail_inst>"} [-f]
Usage: srvctl add nodeapps { { -n <node_name> -A <name|ip>/<netmask>/[if1[|if2...]] } | { -S <subnet>/<netmask>/[if1[|if2...]] } } [-e <em-port>] [-l <ons-local-port>]  [-r <ons-remote-port>] [-t <host>[:<port>][,<host>[:<port>]...]] [-v]
Usage: srvctl add vip -n <node_name> -k <network_number> -A <name|ip>/<netmask>/[if1[|if2...]] [-v]
Usage: srvctl add network [-k <net_num>] -S <subnet>/<netmask>/[if1[|if2...]] [-w <network_type>] [-v]
Usage: srvctl add asm [-l <lsnr_name>]
Usage: srvctl add listener [-l <lsnr_name>] [-s] [-p "[TCP:]<port>[, ...][/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]"] [-o <oracle_home>] [-k <net_num>]
Usage: srvctl add scan -n <scan_name> [-k <network_number>] [-S <subnet>/<netmask>[/if1[|if2|...]]]
Usage: srvctl add scan_listener [-l <lsnr_name_prefix>] [-s] [-p [TCP:]<port>[/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]]
Usage: srvctl add srvpool -g <pool_name> [-l <min>] [-u <max>] [-i <importance>] [-n "<server_list>"] [-f]
Usage: srvctl add oc4j [-v]
Usage: srvctl add filesystem -d <volume_device> -v <volume_name> -g <dg_name> [-m <mountpoint_path>] [-u <user>]
Usage: srvctl add gns -d <domain> -i <vip_name|ip> [-v]
Usage: srvctl add cvu [-t <check_interval_in_minutes>]
For detailed help on each command and object and its options use:
  srvctl <command> <object> -h
[grid@dbrac1 ~]$
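
As an illustration of the add syntax, creating a service named oltp on a hypothetical admin-managed database orcl with instances orcl1 and orcl2 (all names are placeholders) might look like:

# Preferred instance orcl1, available instance orcl2, basic TAF policy
srvctl add service -d orcl -s oltp -r "orcl1" -a "orcl2" -P BASIC
srvctl start service -d orcl -s oltp
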
[grid@dbrac1 ~]$ srvctl remove -h

The SRVCTL remove command removes the configuration, for the object from the Oracle Clusterware. Environment settings for the object are also removed.

Usage: srvctl remove database -d <db_unique_name> [-f] [-y]
Usage: srvctl remove instance -d <db_unique_name> -i <inst_name> [-f] [-y]
Usage: srvctl remove service -d <db_unique_name> -s <service_name> [-i <inst_name>] [-f]
Usage: srvctl remove nodeapps [-f] [-y] [-v]
Usage: srvctl remove vip -i "<vip_name_list>" [-f] [-y] [-v]
Usage: srvctl remove network {-k <network_number> | -a} [-f] [-v]
Usage: srvctl remove asm [-f]
Usage: srvctl remove listener [-l <lsnr_name> | -a] [-f]
Usage: srvctl remove scan [-f] [-y]
Usage: srvctl remove scan_listener [-f] [-y]
Usage: srvctl remove srvpool -g <pool_name>
Usage: srvctl remove oc4j [-f] [-v]
Usage: srvctl remove filesystem -d <volume_device> [-f]
Usage: srvctl remove diskgroup -g <dg_name> [-f]
Usage: srvctl remove gns [-f] [-v]
Usage: srvctl remove cvu [-f]
For detailed help on each command and object and its options use:
  srvctl <command> <object> -h
[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ srvctl modify  -h

The SRVCTL modify command enables user to modify configuration of the object without removing and adding Oracle Clusterware resources. The changes takes effect when the application is restarted.

Usage: srvctl modify database -d <db_unique_name> [-n <db_name>] [-o <oracle_home>] [-u <oracle_user>] [-e <server_list>] [-w <timeout>] [-m <domain>] [-p <spfile>] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-s <start_options>] [-t <stop_options>] [-y {AUTOMATIC | MANUAL | NORESTART}] [-g "<serverpool_list>" [-x <node_name>]] [-a "<diskgroup_list>"|-z] [-j "<acfs_path_list>"] [-f]
Usage: srvctl modify instance -d <db_unique_name> -i <inst_name> { -n <node_name> | -z }
Usage: srvctl modify service -d <db_unique_name> -s <service_name> -i <old_inst_name> -t <new_inst_name> [-f]
Usage: srvctl modify service -d <db_unique_name> -s <service_name> -i <avail_inst_name> -r [-f]
Usage: srvctl modify service -d <db_unique_name> -s <service_name> -n -i "<preferred_list>" [-a "<available_list>"] [-f]
Usage: srvctl modify service -d <db_unique_name> -s <service_name> [-g <pool_name>] [-c {UNIFORM | SINGLETON}] [-P {BASIC|NONE}] [-l [PRIMARY][,PHYSICAL_STANDBY][,LOGICAL_STANDBY][,SNAPSHOT_STANDBY]] [-y {AUTOMATIC | MANUAL}][-q {true|false}] [-x {true|false}] [-j {SHORT|LONG}] [-B {NONE|SERVICE_TIME|THROUGHPUT}] [-e {NONE|SESSION|SELECT}] [-m {NONE|BASIC}] [-z <integer>] [-w <integer>] [-t <edition>]
Usage: srvctl modify asm [-l <lsnr_name>]
Usage: srvctl modify nodeapps {[-n <node_name> -A <new_vip_address>/<netmask>[/if1[|if2|...]]] | [-S <subnet>/<netmask>[/if1[|if2|...]]]} [-u {static|dhcp|mixed}] [-e <em-port>] [ -l <ons-local-port> ] [-r <ons-remote-port> ] [-t <host>[:<port>][,<host>[:<port>]...]] [-v]
Usage: srvctl modify listener [-l <lsnr_name>] [-o <oracle_home>] [-p "[TCP:]<port>[, ...][/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]"] [-u <oracle_user>] [-k <net_num>]
Usage: srvctl modify network [-k <network_number>] [-S <subnet>/<netmask>[/if1[|if2...]]] [-w <network_type>] [-v]
Usage: srvctl modify scan -n <scan_name>
Usage: srvctl modify scan_listener {-u|-p [TCP:]<port>[/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]}
Usage: srvctl modify srvpool -g <pool_name> [-l <min>] [-u <max>] [-i <importance>] [-n "<server_list>"] [-f]
Usage: srvctl modify oc4j -p <oc4j_rmi_port> [-v] [-f]
Usage: srvctl modify filesystem -d <volume_device> -u <user>
Usage: srvctl modify gns {-l <log_level> | [-i <ip_address>] [-N <name> -A <address>] [-D <name> -A <address>] [-c <name> -a <alias>] [-u <alias>] [-r <address>] [-V <name>] [-p <parameter>:<value>[,<parameter>:<value>...]] [-F <forwarded_domains>] [-R <refused_domains>] [-X <excluded_interfaces>] [-v]}
Usage: srvctl modify cvu -t <check_interval_in_minutes>
For detailed help on each command and object and its options use:
  srvctl <command> <object> -h
[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ srvctl getenv -h

The SRVCTL getenv command Gets and displays values for the environment from the configuration file. It allows users to administer environment configuration for the objects.

Usage: srvctl getenv database -d <db_unique_name> [-t "<name_list>"]
Usage: srvctl getenv nodeapps [-a] [-g] [-s] [-t "<name_list>"]
Usage: srvctl getenv vip -i <vip_name> [-t "<name_list>"]
Usage: srvctl getenv listener [-l <lsnr_name>] [-t <name>[, ...]]
Usage: srvctl getenv asm [-t <name>[, ...]]
For detailed help on each command and object and its options use:
  srvctl <command> <object> -h
[grid@dbrac1 ~]$
[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ srvctl setenv -h

The SRVCTL setenv command sets values for the environment in the configuration file. It allows users to administer environment configuration for the objects.

Usage: srvctl setenv database -d <db_unique_name> {-t <name>=<val>[,<name>=<val>,...] | -T <name>=<val>}
Usage: srvctl setenv nodeapps {-t "<name>=<val>[,<name>=<val>,...]" | -T "<name>=<val>"} [-v]
Usage: srvctl setenv vip -i <vip_name> {-t "<name>=<val>[,<name>=<val>,...]" | -T "<name>=<val>"} [-v]
Usage: srvctl setenv listener [-l <lsnr_name>] -t "<name>=<val> [,...]" | -T "<name>=<value>"
Usage: srvctl setenv asm -t "<name>=<val> [,...]" | -T "<name>=<value>"
For detailed help on each command and object and its options use:
  srvctl <command> <object> -h
[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ srvctl unsetenv -h

The SRVCTL unsetenv command unsets values for the environment in the configuration file. It allows users to administer environment configuration for the objects.

Usage: srvctl unsetenv database -d <db_unique_name> -t "<name_list>"
Usage: srvctl unsetenv nodeapps -t "<name_list>" [-v]
Usage: srvctl unsetenv vip -i <vip_name> -t "<name_list>" [-v]
Usage: srvctl unsetenv listener [-l <lsnr_name>] -t "<name>[, ...]"
Usage: srvctl unsetenv asm -t "<name>[, ...]"
For detailed help on each command and object and its options use:
  srvctl <command> <object> -h
[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ srvctl config -h

The SRVCTL config command displays the configuration for the object stored in the OCR.

Usage: srvctl config database [-d <db_unique_name> [-a] ] [-v]
Usage: srvctl config service -d <db_unique_name> [-s <service_name>] [-v]
Usage: srvctl config nodeapps [-a] [-g] [-s]
Usage: srvctl config vip { -n <node_name> | -i <vip_name> }
Usage: srvctl config network [-k <network_number>]
Usage: srvctl config asm [-a]
Usage: srvctl config listener [-l <lsnr_name>] [-a]
Usage: srvctl config scan [-i <ordinal_number>]
Usage: srvctl config scan_listener [-i <ordinal_number>]
Usage: srvctl config srvpool [-g <pool_name>]
Usage: srvctl config oc4j
Usage: srvctl config filesystem -d <volume_device>
Usage: srvctl config gns [-a] [-d] [-k] [-m] [-n <node_name>] [-p] [-s] [-V] [-q <name>] [-l] [-v]
Usage: srvctl config cvu
For detailed help on each command and object and its options use:
  srvctl <command> <object> -h
Usage (for backward compatibility): srvctl config
Usage (for backward compatibility): srvctl config -p <dbname> -n <node>
Usage (for backward compatibility): srvctl config -V
[grid@dbrac1 ~]$
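
A few configuration queries worth remembering (orcl is again a placeholder):

# Stored configuration of the database; -a adds detailed attributes
srvctl config database -d orcl -a

# SCAN VIPs and SCAN listener configuration
srvctl config scan
srvctl config scan_listener
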
[grid@dbrac1 ~]$ srvctl convert -h

The SRVCTL convert command enables the user to convert from RAC One Node database to RAC database and vice versa.

Usage: srvctl convert database -d <db_unique_name> -c RAC [-n <node>]
Usage: srvctl convert database -d <db_unique_name> -c RACONENODE [-i <inst_name>] [-w <timeout>]
[grid@dbrac1 ~]$
[grid@dbrac1 ~]$ srvctl upgrade -h

Upgrades the resources types and resources from an older version to a newer version.

Usage: srvctl upgrade model -s <source-version> -d <destination-version> -p {first|last} [-e <name>=<value>[,<name>=<value>, ...]
Usage: srvctl upgrade database -d <db_unique_name> -o <oracle_home>
[grid@dbrac1 ~]$


2. onsctl
[grid@dbrac1 ~]$ onsctl
ERROR!

usage: onsctl [verbose] <command> [<options>]

The verbose option enables debug tracing and logging (for the server start).

Permitted <command>/<options> combinations are:

command   options
-------   ---------
start                       - Start ons
shutdown                    - Shutdown ons
reload                      - Trigger ons to reread its configuration file
debug     [<attr>=<val> ..] - Display ons server debug information
set       [<attr>=<val> ..] - Set ons log parameters
query     [<attr>=<val>]    - Query ons log parameters
ping      [<max-retry>]     - Ping local ons
help                        - Print brief usage description (this)
usage     [<command>]       - Print detailed usage description

[grid@dbrac1 ~]$
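
The two onsctl subcommands used most often in practice:

# Is the local ONS daemon alive? (an optional retry count can be supplied)
onsctl ping

# Dump the ONS configuration and current connections for troubleshooting
onsctl debug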


3. crs_stat
[grid@dbrac1 ~]$ crs_stat -h
This command is deprecated and has been replaced by 'crsctl status resource'
This command remains for backward compatibility only

Usage:  crs_stat [resource_name [...]] [-v] [-l] [-q] [-c cluster_member]
        crs_stat [resource_name [...]] -t [-v] [-q] [-c cluster_member]
        crs_stat -p [resource_name [...]] [-q]
        crs_stat [-a] application -g
        crs_stat [-a] application -r [-c cluster_member]
        crs_stat -f [resource_name [...]] [-q] [-c cluster_member]
        crs_stat -ls [resource_name [...]] [-q]

[grid@dbrac1 ~]$
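
Although deprecated, the tabular form is still the quickest overview on 11.2, for example:

# One line per resource: name, type, target and state
crs_stat -t

# The same table with restart and failure counters added
crs_stat -t -v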
