
Updating Queries and Datasets in Google Sheets (BigQuery Integration)

When working with Onboarding.online data inside Google Sheets, you’re not limited to the default query. You can update the SQL query, switch datasets, or apply new filters at any time. The results will update automatically inside your spreadsheet.

Written by Alexander Karpovich
Updated over 3 weeks ago

Changing the Query or Dataset

To update the query or connect to a different dataset:

  1. Go to Connected Sheet 1.

  2. At the top of the screen, click “Connection settings”.

  3. On the right-hand panel, you will see:

    • The current dataset,

    • The query being used to fetch data.

You can:

  • Replace the query with any other valid BigQuery SQL query,

  • Switch to a different dataset,

  • Modify filters or limits (see the example below).

Once you save changes, Sheet1 will automatically refresh with the new query results.
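
For example, a query that narrows the results to a single event type and a smaller row limit might look like this. This is a minimal sketch reusing the table and field names from the full example later in this article; the specific event name and limit are illustrative:

SELECT
  jsonPayload.userId AS PlatformUserId,
  jsonPayload.eventname AS EventName,
  jsonPayload.eventtime AS EventTime
FROM `onboarding-online-prod.web_events_*`
WHERE jsonPayload.userId IS NOT NULL
  -- Illustrative filter: keep only one event type
  AND jsonPayload.eventname = 'OnboardingFinished'
ORDER BY EventTime DESC
LIMIT 1000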


Typical Workflow

Default state after export

  • A pre-built query fetches data from a default table, e.g., web_events.

  • The results appear immediately in Sheet1.


Modifying the query

  1. Go to Connected Sheet 1 → Connection settings.

  2. Paste in your custom SQL query, for example:

SELECT
  insertId AS InsertId,
  jsonPayload.userId AS PlatformUserId,
  jsonPayload.url AS Url,
  JSON_VALUE(jsonPayload.headers, '$."X-Client-Region"[0]') AS Country,
  jsonPayload.eventtime AS EventTime,
  jsonPayload.eventname AS EventName,
  jsonPayload.eventargs AS EventArgs,
  jsonPayload.headers AS Headers,
  jsonPayload.useranalyticsdata AS UserAnalyticsData,
  jsonPayload.payload AS Payload
FROM `onboarding-online-prod.web_events_*`
WHERE jsonPayload.userId IS NOT NULL
ORDER BY EventTime DESC
LIMIT 10000

Save and refresh

After saving, Sheet1 updates automatically to show the new data based on your query.
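
Tip: because the example reads from the wildcard table web_events_*, you can also filter on BigQuery's _TABLE_SUFFIX pseudo-column to limit how many sharded tables are scanned. A minimal sketch, assuming the shards use date-style suffixes (check your actual table names in BigQuery):

SELECT
  jsonPayload.userId AS PlatformUserId,
  jsonPayload.eventname AS EventName,
  jsonPayload.eventtime AS EventTime
FROM `onboarding-online-prod.web_events_*`
WHERE jsonPayload.userId IS NOT NULL
  -- Assumed date-style shard suffix; adjust to match your table names
  AND _TABLE_SUFFIX >= '20250101'
ORDER BY EventTime DESC
LIMIT 10000

Scanning fewer shards keeps the refresh faster and reduces the amount of data BigQuery has to process.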


Table Structure

When working with Onboarding.online analytics in BigQuery, the table structure is based on Kubernetes logs streamed via the Google Cloud Log Router, so it follows the standard GCP logging format.


Top-level Schema

Each record includes metadata, event details, and structured payloads. Below is the top-level schema:

Field name        | Type      | Mode     | Description
logName           | STRING    | NULLABLE | Log name
resource          | RECORD    | NULLABLE | Metadata about the resource (K8s, project, etc.)
textPayload       | STRING    | NULLABLE | Plain log text (if not structured JSON)
jsonPayload       | RECORD    | NULLABLE | Main field containing structured event data
timestamp         | TIMESTAMP | NULLABLE | Event timestamp
receiveTimestamp  | TIMESTAMP | NULLABLE | Time when log was received
severity          | STRING    | NULLABLE | Severity level (INFO, ERROR, etc.)
insertId          | STRING    | NULLABLE | Unique record ID
httpRequest       | RECORD    | NULLABLE | HTTP request info
labels            | RECORD    | NULLABLE | Additional labels
operation         | RECORD    | NULLABLE | Operation metadata
trace             | STRING    | NULLABLE | Trace ID
spanId            | STRING    | NULLABLE | Span ID
traceSampled      | BOOLEAN   | NULLABLE | Trace sampling flag
sourceLocation    | RECORD    | NULLABLE | Source location in code
split             | RECORD    | NULLABLE | Split metadata
errorGroups       | RECORD    | REPEATED | Error data if available
apphub            | RECORD    | NULLABLE | AppHub metadata
apphubDestination | RECORD    | NULLABLE | AppHub destination metadata

Note: All business and analytics data is stored inside jsonPayload.
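
For example, here is a minimal sketch that combines a few top-level log fields with event data from jsonPayload (all field names are taken from the schema above):

SELECT
  timestamp,
  severity,
  insertId,
  jsonPayload.eventname AS EventName,
  jsonPayload.userId AS PlatformUserId
FROM `onboarding-online-prod.web_events_*`
WHERE jsonPayload.userId IS NOT NULL
ORDER BY timestamp DESC
LIMIT 100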


jsonPayload Structure

The jsonPayload field contains the structured data relevant to user analytics and onboarding events.

Field name        | Type   | Description
userid            | STRING | Unique user ID
url               | STRING | URL where the event occurred
headers           | STRING | HTTP headers (JSON as string)
eventtime         | STRING | Event timestamp in ISO format
eventname         | STRING | Event name (StartOnboarding, UserUpdatedValue, OnboardingFinished)
eventargs         | STRING | Event arguments (JSON as string)
useranalyticsdata | STRING | Analytics data about the user (JSON: country, platform, etc.)
payload           | STRING | Additional user data such as email and payment system IDs (JSON)


Example Payload

{ "email": "[email protected]", "stripeCustomerId": "cus_abc123", "paddleCustomerId": "123456" }

This structure ensures that all user actions, events, and metadata are captured in a standardized way, making it possible to run powerful queries directly in BigQuery.
