Deploy a database connector

This guide is intended for Google Cloud Search database connector administrators, that is, anyone who is responsible for obtaining, deploying, configuring, and maintaining the database connector.

This guide includes instructions for performing the key tasks of connector deployment:

  • Download the Cloud Search Connector for Databases software
  • Configure the connector for use with a specific SQL data source
  • Run the connector

To understand the concepts in this document, you should be familiar with database concepts, Structured Query Language (SQL), fundamentals of Access Control Lists (ACLs), and Windows or Linux operating systems.

Information about the tasks that the G Suite administrator must perform to map Google Cloud Search to the database connector does not appear in this guide. For information on those tasks, see Manage third-party data sources.

Overview of the Google Cloud Search database connector

By default, Cloud Search can discover, index, and serve content from G Suite data such as docs and sheets. However, you can extend Cloud Search to discover and index data from your own database repositories by using the Google Cloud Search database connector.

The connector uploads indexing requests to the Cloud Search Indexing API and keeps the Cloud Search index in sync with the third-party database repository by periodically reindexing the entire database.

The Cloud Search database connector supports controlling users' access to documents in search results by using ACLs. For more information on this topic, see Access Control List options.

Connector behavior

As the connector administrator, you control the Cloud Search database connector behavior by creating its configuration file. In this file, you define the following primary aspects of the connector behavior:

  • Accessing the target database
  • Identifying searchable content
  • Performing traversals
  • Observing traversal schedules
  • Sending SQL queries to the database to retrieve records
  • Respecting Access Control Lists (ACLs)

To define specific connector behavior, populate the configuration file with key/value pairs for each configuration parameter that you want to customize. For detailed information about this process, see Configure the database connector.

After you have completely populated the configuration file, you have the necessary settings to deploy the connector.

Database content indexing

After you deploy the Cloud Search database connector, it communicates with the data source that you connected to your G Suite account and discovers its content through a process called traversal. During traversal, the connector issues SQL select queries to the repository to retrieve document data, which it then uploads to the Indexing API, where the data is indexed and ultimately served to your users.

Initially, the database connector performs a full traversal, during which it reads and indexes every database record. You can schedule subsequent full traversals at a fixed time interval. In addition to full traversals, you can schedule incremental traversals if your database supports them. Incremental traversals read and re-index only modified database records.
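As a sketch of how these schedules are expressed, the configuration file (described later in this guide) might contain properties like the following; the interval values are illustrative, not recommendations:

```properties
# Perform a full traversal every 24 hours, plus one immediately at startup.
schedule.traversalIntervalSecs=86400
schedule.performTraversalOnStart=true
# Perform an incremental traversal every 15 minutes
# (only used when db.incrementalUpdateSql is defined).
schedule.incrementalTraversalIntervalSecs=900
```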

Connector setup

You can install and run the Cloud Search database connector in almost any environment where Java apps can run, as long as the connector has access to both the internet and the database. Because you deploy the database connector on a host that is separate from Google Cloud Search and the data repository, you must first ensure that you have the G Suite and database information required to establish relationships between Google Cloud Search, the connector, and the data repository. To enable the connector to access the database, you provide specific information to the connector during the configuration steps described later in this document.

To set up the connector's access to Cloud Search, you need a service account, a service account ID, and a data source ID. You need to add the source ID and the path to the service account private key file to the connector configuration, as shown in the configuration file example. Typically the G Suite administrator for the domain can supply these credentials for you.

After you ensure that your environment is correctly set up, you can begin the deployment steps.

Supported databases

The Cloud Search database connector works with any SQL database with a JDBC 4.0 or later compliant driver, including the following:

  • MS SQL Server (2008, 2012, 2014, 2016)
  • Oracle (11g, 12c)
  • Google Cloud SQL
  • MySQL

You provide information that identifies the target database during the connector configuration process. For detailed information, see Database access.

Before you deploy the database connector

Before you deploy the Cloud Search database connector, ensure that your environment has all of the following required components:

  • G Suite private key (which contains the service account ID)
  • G Suite data source ID
  • Database connector .jar file installed on a host machine
  • Supported target database
  • SQL driver used to access the database

Deployment steps

To deploy the Cloud Search database connector, follow these basic steps:

  1. Download and save the Cloud Search database connector software.
  2. Configure the Cloud Search database connector.
  3. Run the Cloud Search database connector.

Step 1. Download and save the database connector software

Google provides the installation software for the connector in the following file:

    google-cloudsearch-database-connector-v1-0.0.2.zip

Download the database connector and extract it to a local working directory where the connector will run. This directory can also contain all the files required for execution, including the configuration file, the service account key file, and optionally the JDBC driver .jar file.

Step 2. Configure the database connector

For the connector to properly access a database and index the relevant content, you must first create its configuration file.

To create a configuration file:

  1. Open a text editor of your choice.
  2. Add key/value pairs to the file contents. For guidance, see the configuration file example.
  3. Name the configuration file.

    You can give the configuration file any name, for example, Mysql.properties. Google recommends naming configuration files consistently, using a .properties or .config extension.

Because you can specify the configuration file path on the command line, a standard file location is not necessary. However, keep the configuration file in the same directory as the connector to simplify tracking and running the connector.

To ensure that the connector recognizes your configuration file, specify its path on the command line. Otherwise, the connector uses connector-config.properties in your local directory as the default file name. For information about specifying the configuration path on the command line, see Run the database connector.

Configuration file example

The following example configuration file shows the parameter key/value pairs that define an example connector's behavior. Many parameters have defaults that are used if the parameter value is not defined within the file.

#
# data source access
api.sourceId=1234567890abcdef
api.identitySourceId=0987654321lmnopq
api.serviceAccountPrivateKeyFile=./PrivateKey.json
#
# database access
db.url=jdbc:mysql://localhost:3306/mysql_test
db.user=root
db.password=passw0rd
#
# traversal SQL statements
db.allRecordsSql=select customer_id, first_name, last_name, phone, change_timestamp from address_book
db.incrementalUpdateSql=select customer_id, first_name, last_name, phone, change_timestamp from address_book where change_timestamp > ?
#
# schedule traversals
schedule.traversalIntervalSecs=36000
schedule.performTraversalOnStart=true
schedule.incrementalTraversalIntervalSecs=3600
#
# column definitions
db.allColumns=customer_id, first_name, last_name, phone, change_timestamp
db.uniqueKeyColumns=customer_id
url.columns=customer_id
#
# content fields
contentTemplate.db.title=customer_id
db.contentColumns=customer_id, first_name, last_name, phone
#
# setting ACLs to "entire domain accessible"
defaultAcl.mode=fallback
defaultAcl.public=true

For detailed descriptions of each parameter, see the Configuration parameters reference.

Step 3. Run the database connector

The following example assumes the required components are located in the local directory on a Linux system.

To run the connector from the command line, type the following command:

java -cp "google-cloudsearch-database-connector-[version number].jar:mysql-connector-java-5.1.41-bin.jar" \
    com.google.enterprise.cloudsearch.database.DatabaseFullTraversalConnector \
    -Dconfig=mysql.config

Command details:

  • google-cloudsearch-database-connector-[version number].jar is the database connector .jar file
  • mysql-connector-java-5.1.41-bin.jar is the SQL driver being used to access the database
  • mysql.config is the configuration file

The connector attempts to detect configuration errors as early as possible. For example, if a database column is defined as part of the record content, but it is missing from the SQL query of the database, then the missing column will be called out as an error when the connector initializes.

However, the connector detects other errors, such as invalid SQL statement syntax, only when it attempts to access the database for the first traversal.

Configuration parameters reference

The following sections describe the configuration parameters in detail. In these sections, the in-line examples are in this format:

key = value

Where "key" is the specific parameter's literal name and "value" is the specific value to set for that parameter.

Data source access

The first parameters every configuration file must specify are the ones necessary to access the Cloud Search data source. The steps required to set up a data source can be found at Manage third-party data sources.

Setting Parameter
Data source ID api.sourceId = 1234567890abcdef

Required. The Cloud Search source ID set up by the G Suite administrator.

Identity source ID api.identitySourceId = 0987654321lmnopq

Required if using external users and groups. The Cloud Search identity source ID set up by the G Suite administrator.

Service account api.serviceAccountPrivateKeyFile = ./PrivateKey.json

Required. The Cloud Search service account key file that the G Suite administrator created to enable connector access.

Database access

Before the connector can traverse a database, you must identify the path to the database and the credentials that enable the connector to sign in to it. Use the following database access parameters to add access information to the configuration file.

Setting Parameter
Database URL db.url = jdbc:mysql://127.0.0.1/dbname

Required. The full path of the database to be accessed.

Database username and password db.user = dbadmin
db.password = pas5w0rd

Required. A valid username and password that the connector uses to access the database. This database user must have read access to the relevant records of the database being read.

JDBC driver db.driverClass = oracle.jdbc.OracleDriver

Required only if the JDBC 4.0 driver class is not loaded automatically from the class path, which it is on most systems.

Configure traversal SQL statements

The connector accesses and indexes database records by performing traversals. To enable the connector to traverse database records, you must provide SQL select queries in the configuration file. The connector supports two traversal methods:

  • Full traversals, which read every database record configured for indexing
  • Incremental traversals, which read and re-index only newly modified database records

The connector executes these traversals according to the schedules you define in the scheduling options of the configuration file, described in Schedule traversals below.

Full traversal

A full traversal reads every database record configured for indexing. A full traversal is required to index new records for Cloud Search and also to re-index all existing records.

Setting Parameter
Full traversal db.allRecordsSql = SELECT customer_id, first_name, last_name, employee_id, interesting_field FROM employee

OR

db.allRecordsSql = SELECT customer_id, first_name, last_name, employee_id, interesting_field FROM employee ORDER BY key OFFSET ? ROWS FETCH FIRST 1000 ROWS ONLY

Required. These example queries read every record of interest in an employee database for indexing.

When using pagination by offset, the SQL query must have a placeholder ("?") for a row offset, starting with zero. The query will be executed multiple times in each full traversal, until no results are returned.

db.allRecordsSql.pagination = offset

Valid pagination options are:

  • none: do not use pagination
  • offset: use pagination by row offset

Every column name that the connector uses in any capacity (content, unique ID, ACLs) must be present in this query. The connector performs some preliminary verifications at startup to detect errors and omissions. For this reason, do not use a general "SELECT * FROM …" query.

Pagination examples

To specify pagination by offset, and break up a full traversal into multiple queries:
# For SQL Server 2012 or Oracle 12c (standard SQL 2008 syntax)
db.allRecordsSql = SELECT customer_id, first_name, last_name, employee_id, interesting_field \
    FROM employee \
    ORDER BY key OFFSET ? ROWS FETCH FIRST 1000 ROWS ONLY
db.allRecordsSql.pagination = offset

or

# For MySQL or Google Cloud SQL
db.allRecordsSql = SELECT customer_id, first_name, last_name, employee_id, interesting_field \
    FROM employee \
    ORDER BY key LIMIT 1000 OFFSET ?
db.allRecordsSql.pagination = offset

Incremental traversal

By default, the connector does not perform incremental traversals. However, if your database contains timestamp fields to indicate modified records, you can configure the connector to perform incremental traversals, which read and re-index only newly modified database records and recent entries to the database. Because an incremental traversal reads a smaller data set, it can be much more efficient than a full traversal.

The incremental traversal parameters define the scope of the traversal and identify the database timestamp used to identify new record additions or newly modified database records.

Setting Parameter
Incremental traversal db.incrementalUpdateSql = select customer_id, first_name, last_name, employee_id, interesting_field, last_update_time from employee where last_update_time > ?

OR

db.incrementalUpdateSql = select customer_id, first_name, last_name, employee_id, interesting_field, last_update_time as timestamp_column from employee where last_update_time > ?

Required when using incremental traversals. To have the connector track the database's own last-update timestamp, alias the timestamp column as timestamp_column in the SQL statement; otherwise, the connector uses the current timestamp of each traversal.

The example SQL select query reads every record that is modified and must be re-indexed.

The mandatory "?" in the query is a placeholder for a timestamp value that the connector tracks and maintains between incremental traversal SQL queries.

By default, the connector stores the start time of the incremental query for use on the following incremental traversal. If no previous incremental traversal has ever occurred, the start time of the connector execution is used.

After the first incremental traversal, Cloud Search stores the timestamp so that connector restarts are able to access the previous incremental traversal timestamp.

Database timezone db.timestamp.timezone = America/Los_Angeles

Specifies the time zone to use for database timestamps. The default is the local time zone where the connector is running.
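Putting these parameters together, an incremental traversal that tracks the database's own timestamp column might be configured as follows; the table and column names are illustrative:

```properties
# Track the repository's last_update_time column, aliased to the reserved
# name timestamp_column, instead of using the connector's own clock.
db.incrementalUpdateSql=select customer_id, first_name, last_name, last_update_time as timestamp_column from address_book where last_update_time > ?
# Interpret the stored timestamps in the database server's time zone.
db.timestamp.timezone=America/Los_Angeles
```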

Schedule traversals

The scheduling parameters determine how long the connector waits between traversals.

Setting Parameter
Full traversal after a specified interval schedule.traversalIntervalSecs = 7200

Specifies the interval between traversals, expressed in seconds. The default value is 86400 (number of seconds in one day).

Full traversal at connector startup schedule.performTraversalOnStart = false

Specifies whether the first full traversal should occur immediately upon each connector startup (true) or not (false). The default value is true.

Incremental traversal after a specified interval schedule.incrementalTraversalIntervalSecs = 900

Specifies the interval between traversals, expressed in seconds. The default value is 300 (number of seconds in 5 minutes). This parameter is not used if the incremental traversal SQL is not defined.

Column definitions

For the connector to access and index database records, you must provide information about column definitions in the configuration file. The connector also uses these column definitions to detect configuration errors at connector startup.

Setting Parameter
All columns db.allColumns = customer_id, first_name, last_name, employee_id, interesting_field, last_update_time, linked_url

Required. Identifies all the columns that are required in a SQL query when accessing the database. The columns defined with this parameter must be explicitly referenced in the queries. Every other column definition parameter is checked against this set of columns.

Unique key columns db.uniqueKeyColumns = customer_id
db.uniqueKeyColumns = last_name, first_name

Required. Lists either a single database column that contains unique values, or a combination of columns whose values together define a unique ID.

Cloud Search requires every searchable document to have a unique identifier within a data source. For this reason, each database record must be able to use its column values to define a unique id. Take extra care to provide a unique ID across all documents if running multiple connectors on separate databases but indexing into a common data set.

URL columns url.format = https://www.example.com/{0}

Defines the format of the view URL. Numbered parameters refer to the columns specified in url.columns, in order, starting with zero.

If not specified, the default is "{0}".

url.columns = customer_id

Required. Specifies the valid, defined name(s) of the column(s) used to build the clickable URL shown in search results. For databases that have no relevant URL associated with each database record, a static link can be used for every record.

However, if the column values do define a valid link for each record, the view URL columns and format configuration values should be specified.

url.columnsToEscape = customer_id

Specifies columns from url.columns whose values will be percent-encoded before they are included in the formatted URL string.

URL column examples

To specify the column(s) used and the format of the view URL:

# static URL not using any database record values
url.format = https://www.example.com
url.columns = customer_id

or

# single column value that is the view URL
url.format = {0}
url.columns = customer_url

or

# single column value that will be substituted into the view URL at position {0}
url.format = https://www.example.com/customer/id={0}
url.columns = customer_id
url.columnsToEscape = customer_id

or

# multiple column values used to build the view URL (columns are order dependent)
url.format = {1}/customer={0}
url.columns = customer_id, linked_url
url.columnsToEscape = customer_id

Metadata Configuration Parameters

Metadata configuration parameters describe the database columns used to populate item metadata. If the configuration file does not contain these parameters, default values are used. The following table shows these parameters.
Setting Parameter
Title itemMetadata.title.field=movieTitle
itemMetadata.title.defaultValue=Gone with the Wind
The metadata attribute that contains the value corresponding to the document title. The default value is an empty string.
Created timestamp itemMetadata.createTime.field=releaseDate
itemMetadata.createTime.defaultValue=1940-01-17
The metadata attribute that contains the value for the document creation timestamp.
Last modified time itemMetadata.updateTime.field=releaseDate
itemMetadata.updateTime.defaultValue=1940-01-17
The metadata attribute that contains the value for the last modification timestamp for the document.
Document language itemMetadata.contentLanguage.field=languageCode
itemMetadata.contentLanguage.defaultValue=en-US
The content language for documents being indexed.
Schema object type itemMetadata.objectType=movie
The object type used by the connector, as defined when you create and register a schema. The connector won't index any structured data if this property is not specified.

If applicable, properties of this schema object should be specified in the SQL queries defined in the configuration, most often by adding aliases to the SQL statement(s). For example, suppose that for a movie database the data source schema contains a property definition named "ActorName"; a SQL statement could then have the form: select …, last_name as ActorName, … from … .

Every column that matches a property name in the schema object is automatically passed on with the indexed database record and used as structured data in the data source.
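Continuing the hypothetical movie database with a schema property named "ActorName", the relevant configuration entries might look like this sketch (all table, column, and object names are assumptions for illustration):

```properties
# Index records against the "movie" object type registered in the schema.
itemMetadata.objectType=movie
# Alias the database column so it matches the schema property name.
db.allRecordsSql=select movie_id, title, last_name as ActorName from movies
db.allColumns=movie_id, title, ActorName
db.uniqueKeyColumns=movie_id
```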

Note: This configuration property points to a value rather than a metadata attribute, and the .field and .defaultValue suffixes are not supported.

Datetime formats

Datetime formats specify the formats expected in metadata attributes. If the configuration file does not contain this parameter, default values are used. The following table shows this parameter.
Setting Parameter
Additional datetime formats structuredData.dateTimePatterns=MM/dd/uuuu HH:mm:ssXXX
A semicolon-separated list of additional java.time.format.DateTimeFormatter patterns. The patterns are used when parsing string values for any date or date-time fields in the metadata or schema. The default value is an empty list, but RFC 3339 and RFC 1123 formats are always supported.
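For instance, to accept both a US-style timestamp and a plain month-day-year date in addition to the always-supported RFC 3339 and RFC 1123 formats, the patterns can be joined with semicolons (the patterns shown are illustrative):

```properties
structuredData.dateTimePatterns=MM/dd/uuuu HH:mm:ssXXX;MM-dd-uuuu
```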

Content fields

The benefit of indexing database record values into Cloud Search is that they can be made searchable. Use the content options to define which record values should be made part of the searchable content.

Setting Parameter
Content data columns db.contentColumns = customer_id, first_name, last_name

Specifies content columns in the database. All columns that you designate as content columns are formatted and uploaded to Cloud Search as searchable document content.

If you don't specify a value, the default is "*" indicating that all columns should be used for content.

Content template columns contentTemplate.db.title = customer_id

Required. The content data columns are formatted for indexing based on a content template. The template defines the priority of each data column value for searching. The highest quality column definition is the required "title" column.

contentTemplate.db.quality.high = first_name, last_name
contentTemplate.db.quality.medium = interesting_field
contentTemplate.db.quality.low = employee_id

You can designate all the other content columns as high, medium, or low search quality fields. Any content column not assigned to a specific category defaults to low.

Blob column db.blobColumn = blob_data

Indicates the name of a single blob column to use for document content instead of a combination of content columns.

If a blob column is specified, then it is considered an error if content columns are also defined. However, metadata and structured data column definitions are still allowed along with blob columns.
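A minimal sketch of a blob-based configuration, assuming a table with a blob_data column and a file_name column used for the title, might look like this:

```properties
# Use a single BLOB column for document content; db.contentColumns must not be set.
db.blobColumn=blob_data
# Metadata and structured data columns are still allowed alongside the blob.
itemMetadata.title.field=file_name
```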

Access control list options

There are multiple options for protecting user access to indexed records with ACLs.

Setting Parameter
Entire domain defaultAcl.mode = override
defaultAcl.public = true

Valid modes are:

  • none: do not use default ACL
  • fallback: use default ACL only if no ACL already present
  • append: add default ACL to existing ACL
  • override: replace existing ACL with default ACL

If defaultAcl.mode is set to override and defaultAcl.public is set to true, these parameters specify "entire domain" access, where every indexed database record is publicly accessible to every user in the domain. The mode value determines when to apply the public ACL.

If defaultAcl.mode is set to none, records will be unsearchable without defined individual ACLs.

Common defined ACL defaultAcl.mode = fallback
defaultAcl.public = false
defaultAcl.readers.users = user1, user2, google:user3
defaultAcl.readers.groups = google:group1, group2
defaultAcl.denied.users = user4, user5
defaultAcl.denied.groups = group3

If you set all these parameters, the entire set specifies a "common defined" ACL to use for each record of the database if the database record does not have an individual ACL definition. This common ACL is used to control access across the entire database depending on the mode selected.

Every user and group is assumed to be a local domain defined user/group unless prefixed with "google:" (literal constant).

Individual ACL If the configuration parameters specify individual ACLs, each record contains its own ACL information within its column values.

If each database record contains individual ACL information that is intended to control its own accessibility, then the SQL query must contain reserved literal column aliases so the connector knows how to retrieve the "readers" and "denied" users and groups. If the following reserved literals are present in the SQL query(ies), then no additional configuration is required:

  • readers_users
  • readers_groups
  • denied_users
  • denied_groups

Individual ACL SQL query examples

The following examples show SQL select queries that use "individual" ACLs:

db.allRecordsSql = select customer_id, first_name, last_name, employee_id, interesting_field, last_update_time, permitted_readers as readers_users, denied_readers as denied_users, permitted_groups as readers_groups, denied_groups as denied_groups from employee

db.incrementalUpdateSql = select customer_id, first_name, last_name, employee_id, interesting_field, last_update_time, permitted_readers as readers_users, denied_readers as denied_users, permitted_groups as readers_groups, denied_groups as denied_groups from employee where last_update_time > ?

Quick reference

The following table lists the most important required and optional parameters that pertain to the database connector, as well as their default values.

Parameter Description
db.driverClass Default: Empty string

The JDBC driver for the connector:

db.driverClass = oracle.jdbc.OracleDriver

db.url Required

Sets the database URL:

db.url = jdbc:mysql://localhost:3306/dbname
db.user Default: Empty string

The database user that the connector uses to query the database:

db.user = dbadmin

db.password Default: Empty string

The password for the database user that the connector uses to query the database:

db.password = pas5w0rd
db.allRecordsSql Required

Define a SQL query to retrieve all relevant columns of the record in the database:

db.allRecordsSql = select customer_id, first_name, last_name, employee_id, interesting_field from employee
db.allRecordsSql.pagination Default: none

Specifies one of the following pagination options:

  • none: do not use pagination
  • offset: use pagination by row offset

db.allRecordsSql.pagination = offset

db.incrementalUpdateSql Default: Disabled (empty string)

Incremental traversal query to retrieve recently changed documents usually by their timestamp:

db.incrementalUpdateSql = select customer_id, first_name, last_name, employee_id, interesting_field, last_update_time from employee where last_update_time > ?

db.timestamp.timezone Default: empty string (uses the connector's time zone)

Specifies the DB server's time zone when there is a difference between the database server and the connector's time zones:

db.timestamp.timezone = America/Los_Angeles
schedule.traversalIntervalSecs Default: 86400 (seconds in a day)

Full traversal interval - the connector's traversal() method is called on the following schedule:

schedule.traversalIntervalSecs = 7200
schedule.performTraversalOnStart Default: true

Invoke a full traversal at start up:

schedule.performTraversalOnStart = false
schedule.incrementalTraversalIntervalSecs Default: 300

Number of seconds between invocations of the incremental traversal for modified records (requires db.incrementalUpdateSql):

schedule.incrementalTraversalIntervalSecs = 900
db.allColumns Required

All the column names and aliases in the main SQL query that will be used in any other column definition:

db.allColumns = customer_id, first_name, last_name, employee_id, interesting_field, last_update_time, linked_url
db.uniqueKeyColumns Required

One or more column heading names (of the form name:type) separated by commas that provide a unique identifier for a database query result:

db.uniqueKeyColumns = customer_id
url.format Default: {0}

Specifies the format of the URL columns:

url.format = https://www.example.com/employee/id={0}
url.columns Required

Specifies the column(s) of a SQL query that will be used to create a viewable URL for search results:

url.columns = customer_id
url.columnsToEscape Default: empty string

Specifies columns from url.columns whose values will be percent-encoded before they are included in the formatted URL string:

url.columnsToEscape = customer_id
itemMetadata.title.field Default: empty string

Specifies the record's column that will be used for the metadata "title":

itemMetadata.title.field = customer_id
itemMetadata.createTime.field Default: empty string

Specifies the record's column to use for the metadata "creation date":

itemMetadata.createTime.field = created_timestamp
itemMetadata.updateTime.field Default: empty string

Specifies the record's column to use for the metadata "modified date":

itemMetadata.updateTime.field = last_update_time
itemMetadata.contentLanguage.field Default: empty string

Specifies the record's column to use for the metadata "language":

itemMetadata.contentLanguage.field = language_used
itemMetadata.objectType Default: empty string

Specifies the name of the schema object type to use for structured data. Note: This is a literal name, not a column value:

itemMetadata.objectType = schema_object_name
db.contentColumns Default: * (all columns from db.allRecordsSql)

Define the columns of a SQL query to use to retrieve database record content:

db.contentColumns = customer_id, first_name, last_name
contentTemplate.db.title Required

Specifies the content HTML title and highest search quality field:

contentTemplate.db.title = id
contentTemplate.db.quality.high Default: empty string

Specifies the content fields given a High search quality value:

contentTemplate.db.quality.high = first_name, last_name
contentTemplate.db.quality.medium Default: empty string

Specifies the content fields given a Medium search quality value:

contentTemplate.db.quality.medium = interesting_field
contentTemplate.db.quality.low Default: all non-specified fields default to low search quality value

Specifies the content fields given a Low search quality value:

contentTemplate.db.quality.low = employee_id
db.blobColumn Default: empty string

Specifies that the database uses a single BLOB column for record content:

db.blobColumn=blob_data
defaultAcl.mode Default: none

Specifies one of the following ACL modes:

  • none: do not use default ACL
  • fallback: use default ACL only if no ACL already present
  • append: add default ACL to existing ACL
  • override: replace existing ACL with default ACL

defaultAcl.mode = override

defaultAcl.public Default: false

Specifies that the default ACL used for the entire repository is public:

defaultAcl.public=true
defaultAcl.readers.users Only used if defaultAcl.mode is set to fallback, and defaultAcl.public is set to false.

Specifies the common ACL readers in a comma delimited list:

defaultAcl.readers.users=user1,user2,user3
defaultAcl.readers.groups Only used if defaultAcl.mode is set to fallback, and defaultAcl.public is set to false.

Specifies the common ACL group readers in a comma-delimited list:

defaultAcl.readers.groups=group1,group2
defaultAcl.denied.users Only used if defaultAcl.mode is set to fallback, and defaultAcl.public is set to false.

Specifies the denied users for the entire repository:

defaultAcl.denied.users=user4,user5
defaultAcl.denied.groups Only used if defaultAcl.mode is set to fallback, and defaultAcl.public is set to false.

Specifies the denied groups for the entire repository:

defaultAcl.denied.groups=group3
defaultAcl.name Default: DEFAULT_ACL_VIRTUAL_CONTAINER

Specifies the name of the virtual container to which the default ACL is applied:

defaultAcl.name = employee-db-default-acl
traverse.updateMode Default: SYNCHRONOUS

Specifies whether traversals use synchronous updates (SYNCHRONOUS) or asynchronous updates (ASYNCHRONOUS):

traverse.updateMode = ASYNCHRONOUS
traverse.exceptionHandler Default: 0

Specifies whether the traversal should ignore exceptions ("ignore"), abort on the first exception ("0"), or abort after a given number of exceptions are encountered (for example, "10"):

traverse.exceptionHandler = ignore
