# Documentation for how to override sg configuration for local development:
# https://github.com/sourcegraph/sourcegraph/blob/main/doc/dev/background-information/sg/index.md#configuration
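# For illustration only (the values below are made up), an sg.config.overwrite.yaml placed
# next to this file can override settings defined here, e.g.:
#
#   env:
#     SRC_LOG_LEVEL: debug
#     PGPORT: 5433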
env:
GITSERVER_MEMORY_OBSERVATION_ENABLED: 'true'
PGPORT: 5432
PGHOST: localhost
PGUSER: sourcegraph
PGPASSWORD: sourcegraph
PGDATABASE: sourcegraph
PGSSLMODE: disable
SG_DEV_MIGRATE_ON_APPLICATION_STARTUP: 'true'
INSECURE_DEV: true
SRC_REPOS_DIR: $HOME/.sourcegraph/repos
SRC_LOG_LEVEL: info
SRC_LOG_FORMAT: condensed
SRC_TRACE_LOG: false
# Set this to true to show an iTerm link to the file:line where the log message came from
SRC_LOG_SOURCE_LINK: false
# Use two gitserver instances in local dev
SRC_GIT_SERVER_1: 127.0.0.1:3501
SRC_GIT_SERVER_2: 127.0.0.1:3502
SRC_GIT_SERVERS: 127.0.0.1:3501 127.0.0.1:3502
# Enable sharded indexed search mode:
INDEXED_SEARCH_SERVERS: localhost:3070 localhost:3071
GO111MODULE: 'on'
DEPLOY_TYPE: dev
SRC_HTTP_ADDR: ':3082'
# I don't think we even need to set these?
SEARCHER_URL: http://127.0.0.1:3181
REPO_UPDATER_URL: http://127.0.0.1:3182
REDIS_ENDPOINT: 127.0.0.1:6379
SYMBOLS_URL: http://localhost:3184
EMBEDDINGS_URL: http://localhost:9991
SRC_SYNTECT_SERVER: http://localhost:9238
SRC_FRONTEND_INTERNAL: localhost:3090
GRAFANA_SERVER_URL: http://localhost:3370
PROMETHEUS_URL: http://localhost:9090
JAEGER_SERVER_URL: http://localhost:16686
SRC_DEVELOPMENT: 'true'
SRC_PROF_HTTP: ''
SRC_PROF_SERVICES: |
[
{ "Name": "frontend", "Host": "127.0.0.1:6063" },
{ "Name": "gitserver-0", "Host": "127.0.0.1:3551" },
{ "Name": "gitserver-1", "Host": "127.0.0.1:3552" },
{ "Name": "searcher", "Host": "127.0.0.1:6069" },
{ "Name": "symbols", "Host": "127.0.0.1:6071" },
{ "Name": "repo-updater", "Host": "127.0.0.1:6074" },
{ "Name": "codeintel-worker", "Host": "127.0.0.1:6088" },
{ "Name": "worker", "Host": "127.0.0.1:6089" },
{ "Name": "worker-executors", "Host": "127.0.0.1:6996" },
{ "Name": "embeddings", "Host": "127.0.0.1:6099" },
{ "Name": "zoekt-index-0", "Host": "127.0.0.1:6072" },
{ "Name": "zoekt-index-1", "Host": "127.0.0.1:6073" },
{ "Name": "syntactic-code-intel-worker-0", "Host": "127.0.0.1:6075" },
{ "Name": "syntactic-code-intel-worker-1", "Host": "127.0.0.1:6076" },
{ "Name": "zoekt-web-0", "Host": "127.0.0.1:3070", "DefaultPath": "/debug/requests/" },
{ "Name": "zoekt-web-1", "Host": "127.0.0.1:3071", "DefaultPath": "/debug/requests/" }
]
# Settings/config
SITE_CONFIG_FILE: ./dev/site-config.json
SITE_CONFIG_ALLOW_EDITS: true
GLOBAL_SETTINGS_FILE: ./dev/global-settings.json
GLOBAL_SETTINGS_ALLOW_EDITS: true
# Point codeintel to the `frontend` database in development
CODEINTEL_PGPORT: $PGPORT
CODEINTEL_PGHOST: $PGHOST
CODEINTEL_PGUSER: $PGUSER
CODEINTEL_PGPASSWORD: $PGPASSWORD
CODEINTEL_PGDATABASE: $PGDATABASE
CODEINTEL_PGSSLMODE: $PGSSLMODE
CODEINTEL_PGDATASOURCE: $PGDATASOURCE
CODEINTEL_PG_ALLOW_SINGLE_DB: true
# Required for `frontend` and `web` commands
SOURCEGRAPH_HTTPS_DOMAIN: sourcegraph.test
SOURCEGRAPH_HTTPS_PORT: 3443
# Required for `web` commands
NODE_OPTIONS: '--max_old_space_size=8192'
# Default `NODE_ENV` to `development`
NODE_ENV: development
# Required for codeintel uploadstore
PRECISE_CODE_INTEL_UPLOAD_AWS_ENDPOINT: http://localhost:9000
PRECISE_CODE_INTEL_UPLOAD_BACKEND: blobstore
# Required for embeddings job upload
EMBEDDINGS_UPLOAD_AWS_ENDPOINT: http://localhost:9000
# Required for upload of search job results
SEARCH_JOBS_UPLOAD_AWS_ENDPOINT: http://localhost:9000
# Point code insights to the `frontend` database in development
CODEINSIGHTS_PGPORT: $PGPORT
CODEINSIGHTS_PGHOST: $PGHOST
CODEINSIGHTS_PGUSER: $PGUSER
CODEINSIGHTS_PGPASSWORD: $PGPASSWORD
CODEINSIGHTS_PGDATABASE: $PGDATABASE
CODEINSIGHTS_PGSSLMODE: $PGSSLMODE
CODEINSIGHTS_PGDATASOURCE: $PGDATASOURCE
# Disable code insights by default
DB_STARTUP_TIMEOUT: 120s # codeinsights-db needs more time to start in some instances.
DISABLE_CODE_INSIGHTS_HISTORICAL: true
DISABLE_CODE_INSIGHTS: true
# # OpenTelemetry in dev - use single http/json endpoint
# OTEL_EXPORTER_OTLP_ENDPOINT: http://127.0.0.1:4318
# OTEL_EXPORTER_OTLP_PROTOCOL: http/json
# Enable gRPC Web UI for debugging
GRPC_WEB_UI_ENABLED: 'true'
# Enable full protobuf message logging when an internal error occurs
SRC_GRPC_INTERNAL_ERROR_LOGGING_LOG_PROTOBUF_MESSAGES_ENABLED: 'true'
SRC_GRPC_INTERNAL_ERROR_LOGGING_LOG_PROTOBUF_MESSAGES_JSON_TRUNCATION_SIZE_BYTES: '1KB'
SRC_GRPC_INTERNAL_ERROR_LOGGING_LOG_PROTOBUF_MESSAGES_HANDLING_MAX_MESSAGE_SIZE_BYTES: '100MB'
## zoekt-specific message logging
GRPC_INTERNAL_ERROR_LOGGING_LOG_PROTOBUF_MESSAGES_ENABLED: 'true'
GRPC_INTERNAL_ERROR_LOGGING_LOG_PROTOBUF_MESSAGES_JSON_TRUNCATION_SIZE_BYTES: '1KB'
GRPC_INTERNAL_ERROR_LOGGING_LOG_PROTOBUF_MESSAGES_HANDLING_MAX_MESSAGE_SIZE_BYTES: '100MB'
# Telemetry V2 export configuration. By default, this points to a test
# instance (go/msp-ops/telemetry-gateway#dev). Set the following:
#
# TELEMETRY_GATEWAY_EXPORTER_EXPORT_ADDR: 'http://127.0.0.1:6080'
#
# in 'sg.config.overwrite.yaml' to point to a locally running Telemetry
# Gateway instead (via 'sg run telemetry-gateway')
TELEMETRY_GATEWAY_EXPORTER_EXPORT_ADDR: "https://telemetry-gateway.sgdev.org:443"
SRC_TELEMETRY_EVENTS_EXPORT_ALL: 'true'
# By default, allow temporary edits to external services.
EXTSVC_CONFIG_ALLOW_EDITS: true
commands:
server:
description: Run an all-in-one sourcegraph/server image
cmd: ./dev/run-server-image.sh
env:
TAG: insiders
CLEAN: 'true'
DATA: '/tmp/sourcegraph-data'
URL: 'http://localhost:7080'
frontend:
description: Frontend
cmd: |
# TODO: This should be fixed
export SOURCEGRAPH_LICENSE_GENERATION_KEY=$(cat ../dev-private/enterprise/dev/test-license-generation-key.pem)
# If EXTSVC_CONFIG_FILE is *unset*, set a default.
export EXTSVC_CONFIG_FILE=${EXTSVC_CONFIG_FILE-'../dev-private/enterprise/dev/external-services-config.json'}
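# (The `${VAR-default}` form only applies the default when the variable is *unset*;
# an explicitly empty value, as set by the enterprise-e2e command set, is kept, unlike `${VAR:-default}`.)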
.bin/frontend
install: |
if [ -n "$DELVE" ]; then
export GCFLAGS='all=-N -l'
fi
go build -gcflags="$GCFLAGS" -o .bin/frontend github.com/sourcegraph/sourcegraph/cmd/frontend
checkBinary: .bin/frontend
env:
CONFIGURATION_MODE: server
USE_ENHANCED_LANGUAGE_DETECTION: false
SITE_CONFIG_FILE: '../dev-private/enterprise/dev/site-config.json'
SITE_CONFIG_ESCAPE_HATCH_PATH: '$HOME/.sourcegraph/site-config.json'
# frontend processes need this to be set so that the paths to the assets are rendered correctly
WEB_BUILDER_DEV_SERVER: 1
watch:
- lib
- internal
- cmd/frontend
gitserver-template: &gitserver_template
cmd: .bin/gitserver
install: |
if [ -n "$DELVE" ]; then
export GCFLAGS='all=-N -l'
fi
go build -gcflags="$GCFLAGS" -o .bin/gitserver github.com/sourcegraph/sourcegraph/cmd/gitserver
checkBinary: .bin/gitserver
env:
HOSTNAME: 127.0.0.1:3178
watch:
- lib
- internal
- cmd/gitserver
# This is only here to stay backwards-compatible with people's custom
# `sg.config.overwrite.yaml` files
gitserver:
<<: *gitserver_template
gitserver-0:
<<: *gitserver_template
env:
GITSERVER_EXTERNAL_ADDR: 127.0.0.1:3501
GITSERVER_ADDR: 127.0.0.1:3501
SRC_REPOS_DIR: $HOME/.sourcegraph/repos_1
SRC_PROF_HTTP: 127.0.0.1:3551
gitserver-1:
<<: *gitserver_template
env:
GITSERVER_EXTERNAL_ADDR: 127.0.0.1:3502
GITSERVER_ADDR: 127.0.0.1:3502
SRC_REPOS_DIR: $HOME/.sourcegraph/repos_2
SRC_PROF_HTTP: 127.0.0.1:3552
repo-updater:
cmd: |
export SOURCEGRAPH_LICENSE_GENERATION_KEY=$(cat ../dev-private/enterprise/dev/test-license-generation-key.pem)
.bin/repo-updater
install: |
if [ -n "$DELVE" ]; then
export GCFLAGS='all=-N -l'
fi
go build -gcflags="$GCFLAGS" -o .bin/repo-updater github.com/sourcegraph/sourcegraph/cmd/repo-updater
checkBinary: .bin/repo-updater
watch:
- lib
- internal
- cmd/repo-updater
symbols:
cmd: .bin/symbols
install: |
if [ -n "$DELVE" ]; then
export GCFLAGS='all=-N -l'
fi
# Ensure scip-ctags-dev is installed to avoid prompting the user to
# install it manually.
if [ ! -f $(./dev/scip-ctags-install.sh which) ]; then
./dev/scip-ctags-install.sh
fi
go build -gcflags="$GCFLAGS" -o .bin/symbols github.com/sourcegraph/sourcegraph/cmd/symbols
checkBinary: .bin/symbols
env:
CTAGS_COMMAND: dev/universal-ctags-dev
SCIP_CTAGS_COMMAND: dev/scip-ctags-dev
CTAGS_PROCESSES: 2
USE_ROCKSKIP: 'false'
watch:
- lib
- internal
- cmd/symbols
- internal/rockskip
embeddings:
cmd: |
export SOURCEGRAPH_LICENSE_GENERATION_KEY=$(cat ../dev-private/enterprise/dev/test-license-generation-key.pem)
.bin/embeddings
install: |
if [ -n "$DELVE" ]; then
export GCFLAGS='all=-N -l'
fi
go build -gcflags="$GCFLAGS" -o .bin/embeddings github.com/sourcegraph/sourcegraph/cmd/embeddings
checkBinary: .bin/embeddings
watch:
- lib
- internal
- cmd/embeddings
- internal/embeddings
worker:
cmd: |
export SOURCEGRAPH_LICENSE_GENERATION_KEY=$(cat ../dev-private/enterprise/dev/test-license-generation-key.pem)
.bin/worker
install: |
if [ -n "$DELVE" ]; then
export GCFLAGS='all=-N -l'
fi
go build -gcflags="$GCFLAGS" -o .bin/worker github.com/sourcegraph/sourcegraph/cmd/worker
checkBinary: .bin/worker
watch:
- lib
- internal
- cmd/worker
cody-gateway:
cmd: |
.bin/cody-gateway
install: |
if [ -n "$DELVE" ]; then
export GCFLAGS='all=-N -l'
fi
go build -gcflags="$GCFLAGS" -o .bin/cody-gateway github.com/sourcegraph/sourcegraph/cmd/cody-gateway
checkBinary: .bin/cody-gateway
env:
SRC_LOG_LEVEL: info
# Enables metrics in dev via debugserver
SRC_PROF_HTTP: '127.0.0.1:6098'
# Set in 'sg.config.overwrite.yaml' if you want to test local Cody Gateway:
# https://docs-legacy.sourcegraph.com/dev/how-to/cody_gateway
CODY_GATEWAY_DOTCOM_ACCESS_TOKEN: ''
CODY_GATEWAY_DOTCOM_API_URL: https://sourcegraph.test:3443/.api/graphql
CODY_GATEWAY_ALLOW_ANONYMOUS: true
CODY_GATEWAY_DIAGNOSTICS_SECRET: sekret
# Set in 'sg.config.overwrite.yaml' if you want to test upstream
# integrations from local Cody Gateway:
# Entitle: https://app.entitle.io/request?data=eyJkdXJhdGlvbiI6IjIxNjAwIiwianVzdGlmaWNhdGlvbiI6IldSSVRFIEpVU1RJRklDQVRJT04gSEVSRSIsInJvbGVJZHMiOlt7ImlkIjoiYjhmYTk2NzgtNDExZC00ZmU1LWE2NDYtMzY4Y2YzYzUwYjJlIiwidGhyb3VnaCI6ImI4ZmE5Njc4LTQxMWQtNGZlNS1hNjQ2LTM2OGNmM2M1MGIyZSIsInR5cGUiOiJyb2xlIn1dfQ%3D%3D
# GSM: https://console.cloud.google.com/security/secret-manager?project=cody-gateway-dev
CODY_GATEWAY_ANTHROPIC_ACCESS_TOKEN: sekret
CODY_GATEWAY_OPENAI_ACCESS_TOKEN: sekret
CODY_GATEWAY_FIREWORKS_ACCESS_TOKEN: sekret
CODY_GATEWAY_SOURCEGRAPH_EMBEDDINGS_API_TOKEN: sekret
CODY_GATEWAY_GOOGLE_ACCESS_TOKEN: sekret
# Connect to services that require SAMS M2M http://go/sams-m2m
SAMS_URL: https://accounts.sgdev.org
# Connect to Enterprise Portal running locally
CODY_GATEWAY_ENTERPRISE_PORTAL_URL: http://localhost:6081
externalSecrets:
SAMS_CLIENT_ID:
project: sourcegraph-local-dev
name: SG_LOCAL_DEV_SAMS_CLIENT_ID
SAMS_CLIENT_SECRET:
project: sourcegraph-local-dev
name: SG_LOCAL_DEV_SAMS_CLIENT_SECRET
watch:
- lib
- internal
- cmd/cody-gateway
telemetry-gateway:
cmd: |
# Telemetry Gateway needs this to parse and validate incoming license keys.
export SOURCEGRAPH_LICENSE_GENERATION_KEY=$(cat ../dev-private/enterprise/dev/test-license-generation-key.pem)
.bin/telemetry-gateway
install: |
if [ -n "$DELVE" ]; then
export GCFLAGS='all=-N -l'
fi
go build -gcflags="$GCFLAGS" -o .bin/telemetry-gateway github.com/sourcegraph/sourcegraph/cmd/telemetry-gateway
checkBinary: .bin/telemetry-gateway
env:
PORT: '6080'
DIAGNOSTICS_SECRET: sekret
TELEMETRY_GATEWAY_EVENTS_PUBSUB_ENABLED: false
SRC_LOG_LEVEL: info
GRPC_WEB_UI_ENABLED: true
# Set for convenience - use real values in sg.config.overwrite.yaml if you
# are interacting with RPCs that enforce SAMS M2M auth. See
# https://github.com/sourcegraph/accounts.sourcegraph.com/wiki/Operators-Cheat-Sheet#create-a-new-idp-client
TELEMETRY_GATEWAY_SAMS_CLIENT_ID: 'foo'
TELEMETRY_GATEWAY_SAMS_CLIENT_SECRET: 'bar'
watch:
- lib
- internal
- cmd/telemetry-gateway
- internal/telemetrygateway
pings:
cmd: |
.bin/pings
install: |
if [ -n "$DELVE" ]; then
export GCFLAGS='all=-N -l'
fi
go build -gcflags="$GCFLAGS" -o .bin/pings github.com/sourcegraph/sourcegraph/cmd/pings
checkBinary: .bin/pings
env:
PORT: '6080'
SRC_LOG_LEVEL: info
DIAGNOSTICS_SECRET: 'lifeisgood'
PINGS_PUBSUB_PROJECT_ID: 'telligentsourcegraph'
PINGS_PUBSUB_TOPIC_ID: 'server-update-checks-test'
HUBSPOT_ACCESS_TOKEN: ''
# Enables metrics in dev via debugserver
SRC_PROF_HTTP: '127.0.0.1:7011'
watch:
- lib
- internal
- cmd/pings
msp-example:
cmd: .bin/msp-example
install: |
if [ -n "$DELVE" ]; then
export GCFLAGS='all=-N -l'
fi
go build -gcflags="$GCFLAGS" -o .bin/msp-example github.com/sourcegraph/sourcegraph/cmd/msp-example
checkBinary: .bin/msp-example
env:
PORT: '9080'
DIAGNOSTICS_SECRET: sekret
SRC_LOG_LEVEL: debug
STATELESS_MODE: 'true'
watch:
- cmd/msp-example
- lib/managedservicesplatform
enterprise-portal:
cmd: |
export PGDSN="postgres://$PGUSER:$PGPASSWORD@$PGHOST:$PGPORT/{{ .Database }}?sslmode=$PGSSLMODE"
# Connect to local development database, with the assumption that it will
# have dotcom database tables.
export DOTCOM_PGDSN_OVERRIDE="postgres://$PGUSER:$PGPASSWORD@$PGHOST:$PGPORT/$PGDATABASE?sslmode=$PGSSLMODE"
.bin/enterprise-portal
install: |
if [ -n "$DELVE" ]; then
export GCFLAGS='all=-N -l'
fi
go build -gcflags="$GCFLAGS" -o .bin/enterprise-portal github.com/sourcegraph/sourcegraph/cmd/enterprise-portal
# Ensure the "msp_iam" database exists (PostgreSQL has no "IF NOT EXISTS" option).
createdb -h $PGHOST -p $PGPORT -U $PGUSER msp_iam || true
checkBinary: .bin/enterprise-portal
env:
PORT: '6081'
DIAGNOSTICS_SECRET: sekret
SRC_LOG_LEVEL: debug
GRPC_WEB_UI_ENABLED: 'true'
# Connects to local database, so include all licenses from local DB
DOTCOM_INCLUDE_PRODUCTION_LICENSES: 'true'
# Used for authentication
SAMS_URL: https://accounts.sgdev.org
REDIS_HOST: localhost
REDIS_PORT: 6379
externalSecrets:
ENTERPRISE_PORTAL_SAMS_CLIENT_ID:
project: sourcegraph-local-dev
name: SG_LOCAL_DEV_SAMS_CLIENT_ID
ENTERPRISE_PORTAL_SAMS_CLIENT_SECRET:
project: sourcegraph-local-dev
name: SG_LOCAL_DEV_SAMS_CLIENT_SECRET
watch:
- lib
- cmd/enterprise-portal
searcher:
cmd: .bin/searcher
install: |
if [ -n "$DELVE" ]; then
export GCFLAGS='all=-N -l'
fi
go build -gcflags="$GCFLAGS" -o .bin/searcher github.com/sourcegraph/sourcegraph/cmd/searcher
checkBinary: .bin/searcher
watch:
- lib
- internal
- cmd/searcher
caddy:
ignoreStdout: true
ignoreStderr: true
cmd: .bin/caddy_${CADDY_VERSION} run --watch --config=dev/Caddyfile
install_func: installCaddy
env:
CADDY_VERSION: 2.7.3
web:
description: Enterprise version of the web app
cmd: pnpm --filter @sourcegraph/web dev
install: |
pnpm install
pnpm run generate
env:
ENABLE_OPEN_TELEMETRY: true
# Needed so that node can ping the caddy server
NODE_TLS_REJECT_UNAUTHORIZED: 0
web-sveltekit:
description: Enterprise version of the web sveltekit app
cmd: pnpm --filter @sourcegraph/web-sveltekit dev:enterprise
install: |
pnpm install
web-standalone-http:
description: Standalone web frontend (dev) with API proxy to a configurable URL
cmd: pnpm --filter @sourcegraph/web serve:dev --color
install: |
pnpm install
pnpm run generate
env:
WEB_BUILDER_SERVE_INDEX: true
SOURCEGRAPH_API_URL: https://sourcegraph.sourcegraph.com
web-integration-build:
description: Build development web application for integration tests
cmd: pnpm --filter @sourcegraph/web run build
env:
INTEGRATION_TESTS: true
web-integration-build-prod:
description: Build production web application for integration tests
cmd: pnpm --filter @sourcegraph/web run build
env:
INTEGRATION_TESTS: true
NODE_ENV: production
web-sveltekit-standalone:
description: Standalone SvelteKit web frontend (dev) with API proxy to a configurable URL
cmd: pnpm --filter @sourcegraph/web-sveltekit run dev
install: |
pnpm install
pnpm generate
web-sveltekit-prod-watch:
description: Builds the prod version of the SvelteKit web app and rebuilds on changes
cmd: pnpm --filter @sourcegraph/web-sveltekit run build --watch
install: |
pnpm install
pnpm generate
docsite:
description: Docsite instance serving the docs
env:
RUN_SCRIPT_NAME: .bin/bazel_run_docsite.sh
cmd: |
# We use --script_path to have bazel write out a script that runs docsite, and then let sg run
# that script. If sg invoked bazel directly, a SIGINT would kill bazel but docsite would not be
# terminated properly; running the generated script ourselves means the signal propagates and
# docsite shuts down cleanly.
#
# We also specifically put the script in .bin, since that directory is gitignored; otherwise the
# run script would be left behind, and currently there is no clean way to remove it - even using a
# bash trap doesn't work, since the trap never gets executed because sg runs the script.
bazel run --script_path=${RUN_SCRIPT_NAME} --noshow_progress --noshow_loading_progress //doc:serve
./${RUN_SCRIPT_NAME}
syntax-highlighter:
ignoreStdout: true
ignoreStderr: true
cmd: |
docker run --name=syntax-highlighter --rm -p9238:9238 \
-e WORKERS=1 -e ROCKET_ADDRESS=0.0.0.0 \
sourcegraph/syntax-highlighter:insiders
install: |
# Remove containers by the old name, too.
docker inspect syntect_server >/dev/null 2>&1 && docker rm -f syntect_server || true
docker inspect syntax-highlighter >/dev/null 2>&1 && docker rm -f syntax-highlighter || true
# Pull syntax-highlighter latest insider image, only during install, but
# skip if OFFLINE=true is set.
if [[ "$OFFLINE" != "true" ]]; then
docker pull -q sourcegraph/syntax-highlighter:insiders
fi
zoekt-indexserver-template: &zoekt_indexserver_template
cmd: |
env PATH="${PWD}/.bin:$PATH" .bin/zoekt-sourcegraph-indexserver \
-sourcegraph_url 'http://localhost:3090' \
-index "$HOME/.sourcegraph/zoekt/index-$ZOEKT_NUM" \
-hostname "localhost:$ZOEKT_HOSTNAME_PORT" \
-interval 1m \
-listen "127.0.0.1:$ZOEKT_LISTEN_PORT" \
-cpu_fraction 0.25
install: |
if [ -n "$DELVE" ]; then
export GCFLAGS='all=-N -l'
fi
mkdir -p .bin
export GOBIN="${PWD}/.bin"
go install -gcflags="$GCFLAGS" github.com/sourcegraph/zoekt/cmd/zoekt-archive-index
go install -gcflags="$GCFLAGS" github.com/sourcegraph/zoekt/cmd/zoekt-git-index
go install -gcflags="$GCFLAGS" github.com/sourcegraph/zoekt/cmd/zoekt-sourcegraph-indexserver
checkBinary: .bin/zoekt-sourcegraph-indexserver
env: &zoektenv
CTAGS_COMMAND: dev/universal-ctags-dev
SCIP_CTAGS_COMMAND: dev/scip-ctags-dev
GRPC_ENABLED: true
zoekt-index-0:
<<: *zoekt_indexserver_template
env:
<<: *zoektenv
ZOEKT_NUM: 0
ZOEKT_HOSTNAME_PORT: 3070
ZOEKT_LISTEN_PORT: 6072
zoekt-index-1:
<<: *zoekt_indexserver_template
env:
<<: *zoektenv
ZOEKT_NUM: 1
ZOEKT_HOSTNAME_PORT: 3071
ZOEKT_LISTEN_PORT: 6073
zoekt-web-template: &zoekt_webserver_template
install: |
if [ -n "$DELVE" ]; then
export GCFLAGS='all=-N -l'
fi
mkdir -p .bin
env GOBIN="${PWD}/.bin" go install -gcflags="$GCFLAGS" github.com/sourcegraph/zoekt/cmd/zoekt-webserver
checkBinary: .bin/zoekt-webserver
env:
JAEGER_DISABLED: true
OPENTELEMETRY_DISABLED: false
GOGC: 25
zoekt-web-0:
<<: *zoekt_webserver_template
cmd: env PATH="${PWD}/.bin:$PATH" .bin/zoekt-webserver -index "$HOME/.sourcegraph/zoekt/index-0" -pprof -rpc -indexserver_proxy -listen "127.0.0.1:3070"
zoekt-web-1:
<<: *zoekt_webserver_template
cmd: env PATH="${PWD}/.bin:$PATH" .bin/zoekt-webserver -index "$HOME/.sourcegraph/zoekt/index-1" -pprof -rpc -indexserver_proxy -listen "127.0.0.1:3071"
codeintel-worker:
cmd: |
export SOURCEGRAPH_LICENSE_GENERATION_KEY=$(cat ../dev-private/enterprise/dev/test-license-generation-key.pem)
.bin/codeintel-worker
install: |
if [ -n "$DELVE" ]; then
export GCFLAGS='all=-N -l'
fi
go build -gcflags="$GCFLAGS" -o .bin/codeintel-worker github.com/sourcegraph/sourcegraph/cmd/precise-code-intel-worker
checkBinary: .bin/codeintel-worker
watch:
- lib
- internal
- cmd/precise-code-intel-worker
- lib/codeintel
syntactic-codeintel-worker-template: &syntactic_codeintel_worker_template
cmd: |
export SOURCEGRAPH_LICENSE_GENERATION_KEY=$(cat ../dev-private/enterprise/dev/test-license-generation-key.pem)
.bin/syntactic-code-intel-worker
install: |
if [ -n "$DELVE" ]; then
export GCFLAGS='all=-N -l'
fi
if [ ! -f $(./dev/scip-syntax-install.sh which) ]; then
echo "Building scip-syntax"
./dev/scip-syntax-install.sh
fi
echo "Building codeintel-outkline-scip-worker"
go build -gcflags="$GCFLAGS" -o .bin/syntactic-code-intel-worker github.com/sourcegraph/sourcegraph/cmd/syntactic-code-intel-worker
checkBinary: .bin/syntactic-code-intel-worker
watch:
- lib
- internal
- cmd/syntactic-code-intel-worker
- lib/codeintel
env:
SCIP_SYNTAX_PATH: dev/scip-syntax-dev
syntactic-code-intel-worker-0:
<<: *syntactic_codeintel_worker_template
env:
SYNTACTIC_CODE_INTEL_WORKER_ADDR: 127.0.0.1:6075
syntactic-code-intel-worker-1:
<<: *syntactic_codeintel_worker_template
cmd: |
export SOURCEGRAPH_LICENSE_GENERATION_KEY=$(cat ../dev-private/enterprise/dev/test-license-generation-key.pem)
.bin/syntactic-code-intel-worker
env:
SYNTACTIC_CODE_INTEL_WORKER_ADDR: 127.0.0.1:6076
executor-template:
&executor_template # TMPDIR is set here so it's not set in the `install` process, which would trip up `go build`.
cmd: |
env TMPDIR="$HOME/.sourcegraph/executor-temp" .bin/executor
install: |
if [ -n "$DELVE" ]; then
export GCFLAGS='all=-N -l'
fi
go build -gcflags="$GCFLAGS" -o .bin/executor github.com/sourcegraph/sourcegraph/cmd/executor
checkBinary: .bin/executor
env:
# Required for frontend and executor to communicate
EXECUTOR_FRONTEND_URL: http://localhost:3080
# Must match the secret defined in the site config.
EXECUTOR_FRONTEND_PASSWORD: hunter2hunter2hunter2
# Disable firecracker inside executor in dev
EXECUTOR_USE_FIRECRACKER: false
EXECUTOR_QUEUE_NAME: TEMPLATE
watch:
- lib
- internal
- cmd/executor
executor-kubernetes-template: &executor_kubernetes_template
cmd: |
cd $MANIFEST_PATH
cleanup() {
kubectl delete jobs --all
kubectl delete -f .
}
kubectl delete -f . --ignore-not-found
kubectl apply -f .
trap cleanup EXIT SIGINT
while true; do
sleep 1
done
install: |
bazel run //cmd/executor-kubernetes:image_tarball
env:
IMAGE: executor-kubernetes:candidate
# TODO: This is required but should only be set on M1 Macs.
PLATFORM: linux/arm64
watch:
- lib
- internal
- cmd/executor
codeintel-executor:
<<: *executor_template
cmd: |
env TMPDIR="$HOME/.sourcegraph/indexer-temp" .bin/executor
env:
EXECUTOR_QUEUE_NAME: codeintel
# If you want to use this, either start it with `sg run codeintel-executor-firecracker` or
# modify `commandsets.codeintel` in your local `sg.config.overwrite.yaml`
codeintel-executor-firecracker:
<<: *executor_template
cmd: |
env TMPDIR="$HOME/.sourcegraph/codeintel-executor-temp" \
sudo --preserve-env=TMPDIR,EXECUTOR_QUEUE_NAME,EXECUTOR_FRONTEND_URL,EXECUTOR_FRONTEND_PASSWORD,EXECUTOR_USE_FIRECRACKER \
.bin/executor
env:
EXECUTOR_USE_FIRECRACKER: true
EXECUTOR_QUEUE_NAME: codeintel
codeintel-executor-kubernetes:
<<: *executor_kubernetes_template
env:
MANIFEST_PATH: ./cmd/executor/kubernetes/codeintel
batches-executor:
<<: *executor_template
cmd: |
env TMPDIR="$HOME/.sourcegraph/batches-executor-temp" .bin/executor
env:
EXECUTOR_QUEUE_NAME: batches
EXECUTOR_MAXIMUM_NUM_JOBS: 8
# If you want to use this, either start it with `sg run batches-executor-firecracker` or
# modify the `commandsets.batches` in your local `sg.config.overwrite.yaml`
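# For illustration only, such an overwrite could look roughly like the snippet below; the exact
# merge semantics are described in the sg configuration docs linked at the top of this file:
#
#   commandsets:
#     batches:
#       commands:
#         - batches-executor-firecracker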
batches-executor-firecracker:
<<: *executor_template
cmd: |
env TMPDIR="$HOME/.sourcegraph/batches-executor-temp" \
sudo --preserve-env=TMPDIR,EXECUTOR_QUEUE_NAME,EXECUTOR_FRONTEND_URL,EXECUTOR_FRONTEND_PASSWORD,EXECUTOR_USE_FIRECRACKER \
.bin/executor
env:
EXECUTOR_USE_FIRECRACKER: true
EXECUTOR_QUEUE_NAME: batches
batches-executor-kubernetes:
<<: *executor_kubernetes_template
env:
MANIFEST_PATH: ./cmd/executor/kubernetes/batches
# This tool rebuilds the batcheshelper image every time its source changes.
batcheshelper-builder:
# Nothing to run for this, we just want to re-run the install script every time.
cmd: exit 0
install: |
bazel build //cmd/batcheshelper:image_tarball
docker load --input $(bazel cquery //cmd/batcheshelper:image_tarball --output=files)
env:
IMAGE: batcheshelper:candidate
# TODO: This is required but should only be set on M1 Macs.
PLATFORM: linux/arm64
watch:
- cmd/batcheshelper
- lib/batches
continueWatchOnExit: true
multiqueue-executor:
<<: *executor_template
cmd: |
env TMPDIR="$HOME/.sourcegraph/multiqueue-executor-temp" .bin/executor
env:
EXECUTOR_QUEUE_NAME: ''
EXECUTOR_QUEUE_NAMES: 'codeintel,batches'
EXECUTOR_MAXIMUM_NUM_JOBS: 8
blobstore:
cmd: .bin/blobstore
install: |
# Ensure the old blobstore Docker container is not running
docker rm -f blobstore
if [ -n "$DELVE" ]; then
export GCFLAGS='all=-N -l'
fi
go build -gcflags="$GCFLAGS" -o .bin/blobstore github.com/sourcegraph/sourcegraph/cmd/blobstore
checkBinary: .bin/blobstore
watch:
- lib
- internal
- cmd/blobstore
env:
BLOBSTORE_DATA_DIR: $HOME/.sourcegraph-dev/data/blobstore-go
redis-postgres:
# Add the following overwrites to your sg.config.overwrite.yaml to use the docker-compose
# database:
#
# env:
# PGHOST: localhost
# PGPASSWORD: sourcegraph
# PGUSER: sourcegraph
#
# You could also add an overwrite to add `redis-postgres` to the relevant command set(s).
description: Dockerized version of redis and postgres
cmd: docker-compose -f dev/redis-postgres.yml up $COMPOSE_ARGS
env:
COMPOSE_ARGS: --force-recreate
jaeger:
cmd: |
echo "Jaeger will be available on http://localhost:16686/-/debug/jaeger/search"
.bin/jaeger-all-in-one-${JAEGER_VERSION} --log-level ${JAEGER_LOG_LEVEL}
install_func: installJaeger
env:
JAEGER_VERSION: 1.45.0
JAEGER_DISK: $HOME/.sourcegraph-dev/data/jaeger
JAEGER_LOG_LEVEL: error
QUERY_BASE_PATH: /-/debug/jaeger
grafana:
cmd: |
if [[ $(uname) == "Linux" ]]; then
# Linux needs an extra arg to support host.docker.internal, which is how grafana connects
# to the prometheus backend.
ADD_HOST_FLAG="--add-host=host.docker.internal:host-gateway"
# Docker users on Linux will generally be using direct user mapping, which
# means that they'll want the data in the volume mount to be owned by the
# same user as is running this script. Fortunately, the Grafana container
# doesn't really care what user it runs as, so long as it can write to
# /var/lib/grafana.
DOCKER_USER="--user=$UID"
fi
echo "Grafana: serving on http://localhost:${PORT}"
echo "Grafana: note that logs are piped to ${GRAFANA_LOG_FILE}"
docker run --rm ${DOCKER_USER} \
--name=${CONTAINER} \
--cpus=1 \
--memory=1g \
-p 0.0.0.0:3370:3370 ${ADD_HOST_FLAG} \
-v "${GRAFANA_DISK}":/var/lib/grafana \
-v "$(pwd)"/dev/grafana/all:/sg_config_grafana/provisioning/datasources \
grafana:candidate >"${GRAFANA_LOG_FILE}" 2>&1
install: |
mkdir -p "${GRAFANA_DISK}"
mkdir -p "$(dirname ${GRAFANA_LOG_FILE})"
docker inspect $CONTAINER >/dev/null 2>&1 && docker rm -f $CONTAINER
bazel build //docker-images/grafana:image_tarball
docker load --input $(bazel cquery //docker-images/grafana:image_tarball --output=files)
env:
GRAFANA_DISK: $HOME/.sourcegraph-dev/data/grafana
# Log file location: since we log outside of the Docker container, we should
# log somewhere that's _not_ ~/.sourcegraph-dev/data/grafana, since that gets
# volume mounted into the container and therefore has its own ownership
# semantics.
# Now for the actual logging. Grafana's output gets sent to stdout and stderr.
# We want to capture that output, but because it's fairly noisy, don't want to
# display it in the normal case.
GRAFANA_LOG_FILE: $HOME/.sourcegraph-dev/logs/grafana/grafana.log
IMAGE: grafana:candidate
CONTAINER: grafana
PORT: 3370
# docker containers must access things via docker host on non-linux platforms
DOCKER_USER: ''
ADD_HOST_FLAG: ''
CACHE: false
prometheus:
cmd: |
if [[ $(uname) == "Linux" ]]; then
DOCKER_USER="--user=$UID"
# Frontend generally runs outside of Docker, so to access it we need to be
# able to access ports on the host. --net=host is a very dirty way of
# enabling this.
DOCKER_NET="--net=host"
SRC_FRONTEND_INTERNAL="localhost:3090"
fi
echo "Prometheus: serving on http://localhost:${PORT}"
echo "Prometheus: note that logs are piped to ${PROMETHEUS_LOG_FILE}"
docker run --rm ${DOCKER_NET} ${DOCKER_USER} \
--name=${CONTAINER} \
--cpus=1 \
--memory=4g \
-p 0.0.0.0:9090:9090 \
-v "${PROMETHEUS_DISK}":/prometheus \
-v "$(pwd)/${CONFIG_DIR}":/sg_prometheus_add_ons \
-e SRC_FRONTEND_INTERNAL="${SRC_FRONTEND_INTERNAL}" \
-e DISABLE_SOURCEGRAPH_CONFIG="${DISABLE_SOURCEGRAPH_CONFIG:-""}" \
-e DISABLE_ALERTMANAGER="${DISABLE_ALERTMANAGER:-""}" \
-e PROMETHEUS_ADDITIONAL_FLAGS="--web.enable-lifecycle --web.enable-admin-api" \
${IMAGE} >"${PROMETHEUS_LOG_FILE}" 2>&1
install: |
mkdir -p "${PROMETHEUS_DISK}"
mkdir -p "$(dirname ${PROMETHEUS_LOG_FILE})"
docker inspect $CONTAINER >/dev/null 2>&1 && docker rm -f $CONTAINER
if [[ $(uname) == "Linux" ]]; then
PROM_TARGETS="dev/prometheus/linux/prometheus_targets.yml"
fi
cp ${PROM_TARGETS} "${CONFIG_DIR}"/prometheus_targets.yml
bazel build //docker-images/prometheus:image_tarball
docker load --input $(bazel cquery //docker-images/prometheus:image_tarball --output=files)
env:
PROMETHEUS_DISK: $HOME/.sourcegraph-dev/data/prometheus
# See comment above for `grafana`
PROMETHEUS_LOG_FILE: $HOME/.sourcegraph-dev/logs/prometheus/prometheus.log
IMAGE: prometheus:candidate
CONTAINER: prometheus
PORT: 9090
CONFIG_DIR: docker-images/prometheus/config
DOCKER_USER: ''
DOCKER_NET: ''
PROM_TARGETS: dev/prometheus/all/prometheus_targets.yml
SRC_FRONTEND_INTERNAL: host.docker.internal:3090
ADD_HOST_FLAG: ''
DISABLE_SOURCEGRAPH_CONFIG: false
postgres_exporter:
cmd: |
if [[ $(uname) == "Linux" ]]; then
# Linux needs an extra arg to support host.docker.internal, which is how
# postgres_exporter connects to the Postgres database on the host.
ADD_HOST_FLAG="--add-host=host.docker.internal:host-gateway"
fi
# Use psql to read the effective values for PG* env vars (instead of, e.g., hardcoding the default
# values).
get_pg_env() { psql -c '\set' | grep "$1" | cut -f 2 -d "'"; }
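# (`psql -c '\set'` lists psql variables as lines like HOST = 'localhost'; grep picks
# the requested variable and cut takes the value between the single quotes.)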
PGHOST=${PGHOST-$(get_pg_env HOST)}
PGUSER=${PGUSER-$(get_pg_env USER)}
PGPORT=${PGPORT-$(get_pg_env PORT)}
# we need to be able to query the migration_logs table
PGDATABASE=${PGDATABASE-$(get_pg_env DBNAME)}
ADJUSTED_HOST=${PGHOST:-127.0.0.1}
if [[ ("$ADJUSTED_HOST" == "localhost" || "$ADJUSTED_HOST" == "127.0.0.1" || -f "$ADJUSTED_HOST") && "$OSTYPE" != "linux-gnu" ]]; then
ADJUSTED_HOST="host.docker.internal"
fi
NET_ARG=""
DATA_SOURCE_NAME="postgresql://${PGUSER}:${PGPASSWORD}@${ADJUSTED_HOST}:${PGPORT}/${PGDATABASE}?sslmode=${PGSSLMODE:-disable}"
if [[ "$OSTYPE" == "linux-gnu" ]]; then
NET_ARG="--net=host"
DATA_SOURCE_NAME="postgresql://${PGUSER}:${PGPASSWORD}@${ADJUSTED_HOST}:${PGPORT}/${PGDATABASE}?sslmode=${PGSSLMODE:-disable}"
fi
echo "postgres_exporter: serving on http://localhost:${PORT}"
docker run --rm ${DOCKER_USER} \
--name=${CONTAINER} \
-e DATA_SOURCE_NAME="${DATA_SOURCE_NAME}" \
--cpus=1 \
--memory=1g \
-p 0.0.0.0:9187:9187 ${ADD_HOST_FLAG} \
"${IMAGE}"
install: |
docker inspect $CONTAINER >/dev/null 2>&1 && docker rm -f $CONTAINER
bazel build //docker-images/postgres_exporter:image_tarball
docker load --input $(bazel cquery //docker-images/postgres_exporter:image_tarball --output=files)
env:
IMAGE: postgres-exporter:candidate
CONTAINER: postgres_exporter
# docker containers must access things via docker host on non-linux platforms
DOCKER_USER: ''
ADD_HOST_FLAG: ''
monitoring-generator:
cmd: echo "monitoring-generator is deprecated, please run 'sg generate go' or 'bazel run //dev:write_all_generated' instead"
env:
otel-collector:
install: |
bazel build //docker-images/opentelemetry-collector:image_tarball
docker load --input $(bazel cquery //docker-images/opentelemetry-collector:image_tarball --output=files)
description: OpenTelemetry collector
cmd: |
JAEGER_HOST='host.docker.internal'
if [[ $(uname) == "Linux" ]]; then
# Jaeger generally runs outside of Docker, so the collector needs to reach ports on
# the host; host.docker.internal only exists on macOS Docker, so on Linux
# --net=host is a very dirty way of enabling this.
DOCKER_NET="--net=host"
JAEGER_HOST="localhost"
fi
docker container rm -f otel-collector
docker run --rm --name=otel-collector $DOCKER_NET $DOCKER_ARGS \
-p 4317:4317 -p 4318:4318 -p 55679:55679 -p 55670:55670 \
-p 8888:8888 \
-e JAEGER_HOST=$JAEGER_HOST \
-e HONEYCOMB_API_KEY=$HONEYCOMB_API_KEY \
-e HONEYCOMB_DATASET=$HONEYCOMB_DATASET \
$IMAGE --config "/etc/otel-collector/$CONFIGURATION_FILE"
env:
IMAGE: opentelemetry-collector:candidate
# Overwrite the following in sg.config.overwrite.yaml, based on which collector
# config you are using - see docker-images/opentelemetry-collector for more details.
CONFIGURATION_FILE: 'configs/jaeger.yaml'
# HONEYCOMB_API_KEY: ''
# HONEYCOMB_DATASET: ''
storybook:
cmd: pnpm storybook
install: pnpm install
# This will execute `env`, a utility that prints the process environment. It can
# be used to debug which global env vars `sg` sets for its commands.
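# (Run it with `sg run debug-env`.)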
debug-env:
description: Debug env vars
cmd: env
bext:
cmd: pnpm --filter @sourcegraph/browser dev
install: pnpm install
sourcegraph:
description: Single-program distribution (dev only)
cmd: |
export SOURCEGRAPH_LICENSE_GENERATION_KEY=$(cat ../dev-private/enterprise/dev/test-license-generation-key.pem)
.bin/sourcegraph
install: |
if [ -n "$DELVE" ]; then
export GCFLAGS='all=-N -l'
fi
go build -buildvcs=false -gcflags="$GCFLAGS" -o .bin/sourcegraph github.com/sourcegraph/sourcegraph/cmd/sourcegraph
checkBinary: .bin/sourcegraph
env:
CONFIGURATION_MODE: server
SITE_CONFIG_FILE: '../dev-private/enterprise/dev/site-config.json'
SITE_CONFIG_ESCAPE_HATCH_PATH: '$HOME/.sourcegraph/site-config.json'
EXTSVC_CONFIG_FILE: ../dev-private/enterprise/dev/external-services-config.json
WEB_BUILDER_DEV_SERVER: 1
INDEXED_SEARCH_SERVERS:
GITSERVER_ADDR: 127.0.0.1:3178
GITSERVER_EXTERNAL_ADDR: 127.0.0.1:3178
SRC_GIT_SERVERS: 127.0.0.1:3178
SRC_DISABLE_OOBMIGRATION_VALIDATION: 1
watch:
- cmd
- internal
- lib
- schema
bazelCommands:
blobstore:
target: //cmd/blobstore
env:
BLOBSTORE_DATA_DIR: $HOME/.sourcegraph-dev/data/blobstore-go
cody-gateway:
target: //cmd/cody-gateway
env:
SRC_LOG_LEVEL: info
# Enables metrics in dev via debugserver
SRC_PROF_HTTP: '127.0.0.1:6098'
# Set in override if you want to test local Cody Gateway: https://docs-legacy.sourcegraph.com/dev/how-to/cody_gateway
CODY_GATEWAY_DOTCOM_ACCESS_TOKEN: ''
CODY_GATEWAY_DOTCOM_API_URL: https://sourcegraph.test:3443/.api/graphql
CODY_GATEWAY_ALLOW_ANONYMOUS: true
CODY_GATEWAY_DIAGNOSTICS_SECRET: sekret
# Set in 'sg.config.overwrite.yaml' if you want to test upstream
# integrations from local Cody Gateway:
# Entitle: https://app.entitle.io/request?data=eyJkdXJhdGlvbiI6IjIxNjAwIiwianVzdGlmaWNhdGlvbiI6IldSSVRFIEpVU1RJRklDQVRJT04gSEVSRSIsInJvbGVJZHMiOlt7ImlkIjoiYjhmYTk2NzgtNDExZC00ZmU1LWE2NDYtMzY4Y2YzYzUwYjJlIiwidGhyb3VnaCI6ImI4ZmE5Njc4LTQxMWQtNGZlNS1hNjQ2LTM2OGNmM2M1MGIyZSIsInR5cGUiOiJyb2xlIn1dfQ%3D%3D
# GSM: https://console.cloud.google.com/security/secret-manager?project=cody-gateway-dev
CODY_GATEWAY_ANTHROPIC_ACCESS_TOKEN: sekret
CODY_GATEWAY_OPENAI_ACCESS_TOKEN: sekret
CODY_GATEWAY_FIREWORKS_ACCESS_TOKEN: sekret
CODY_GATEWAY_SOURCEGRAPH_EMBEDDINGS_API_TOKEN: sekret
CODY_GATEWAY_GOOGLE_ACCESS_TOKEN: sekret
# Connect to services that require SAMS M2M http://go/sams-m2m
SAMS_URL: https://accounts.sgdev.org
# Connect to Enterprise Portal running locally
CODY_GATEWAY_ENTERPRISE_PORTAL_URL: http://localhost:6081
externalSecrets:
SAMS_CLIENT_ID:
project: sourcegraph-local-dev
name: SG_LOCAL_DEV_SAMS_CLIENT_ID
SAMS_CLIENT_SECRET:
project: sourcegraph-local-dev
name: SG_LOCAL_DEV_SAMS_CLIENT_SECRET
docsite:
runTarget: //doc:serve
searcher:
target: //cmd/searcher
syntax-highlighter:
target: //docker-images/syntax-highlighter:syntect_server
ignoreStdout: true
ignoreStderr: true
env:
# Environment copied from Dockerfile
WORKERS: '1'
ROCKET_ENV: 'production'
ROCKET_LIMITS: '{json=10485760}'
ROCKET_SECRET_KEY: 'SeerutKeyIsI7releuantAndknvsuZPluaseIgnorYA='
ROCKET_KEEP_ALIVE: '0'
ROCKET_PORT: '9238'
QUIET: 'true'
frontend:
description: Enterprise frontend
target: //cmd/frontend
precmd: |
export SOURCEGRAPH_LICENSE_GENERATION_KEY=$(cat ../dev-private/enterprise/dev/test-license-generation-key.pem)
# If EXTSVC_CONFIG_FILE is *unset*, set a default.
export EXTSVC_CONFIG_FILE=${EXTSVC_CONFIG_FILE-'../dev-private/enterprise/dev/external-services-config.json'}
env:
CONFIGURATION_MODE: server
USE_ENHANCED_LANGUAGE_DETECTION: false
SITE_CONFIG_FILE: '../dev-private/enterprise/dev/site-config.json'
SITE_CONFIG_ESCAPE_HATCH_PATH: '$HOME/.sourcegraph/site-config.json'
# frontend processes need this to be set so that the paths to the assets are rendered correctly
WEB_BUILDER_DEV_SERVER: 1
worker:
target: //cmd/worker
precmd: |
export SOURCEGRAPH_LICENSE_GENERATION_KEY=$(cat ../dev-private/enterprise/dev/test-license-generation-key.pem)
repo-updater:
target: //cmd/repo-updater
precmd: |
export SOURCEGRAPH_LICENSE_GENERATION_KEY=$(cat ../dev-private/enterprise/dev/test-license-generation-key.pem)
symbols:
target: //cmd/symbols
checkBinary: .bin/symbols
env:
CTAGS_COMMAND: dev/universal-ctags-dev
SCIP_CTAGS_COMMAND: dev/scip-ctags-dev
CTAGS_PROCESSES: 2
USE_ROCKSKIP: 'false'
gitserver-template: &gitserver_bazel_template
target: //cmd/gitserver
env: &gitserverenv
HOSTNAME: 127.0.0.1:3178
GITSERVER_MEMORY_OBSERVATION_ENABLED: 'true'
# This is only here to stay backwards-compatible with people's custom
# `sg.config.overwrite.yaml` files
gitserver:
<<: *gitserver_bazel_template
gitserver-0:
<<: *gitserver_bazel_template
env:
<<: *gitserverenv
GITSERVER_EXTERNAL_ADDR: 127.0.0.1:3501
GITSERVER_ADDR: 127.0.0.1:3501
SRC_REPOS_DIR: $HOME/.sourcegraph/repos_1
SRC_PROF_HTTP: 127.0.0.1:3551
gitserver-1:
<<: *gitserver_bazel_template
env:
<<: *gitserverenv
GITSERVER_EXTERNAL_ADDR: 127.0.0.1:3502
GITSERVER_ADDR: 127.0.0.1:3502
SRC_REPOS_DIR: $HOME/.sourcegraph/repos_2
SRC_PROF_HTTP: 127.0.0.1:3552
codeintel-worker:
precmd: |
export SOURCEGRAPH_LICENSE_GENERATION_KEY=$(cat ../dev-private/enterprise/dev/test-license-generation-key.pem)
target: //cmd/precise-code-intel-worker
executor-template: &executor_template_bazel
target: //cmd/executor
env:
EXECUTOR_QUEUE_NAME: TEMPLATE
TMPDIR: $HOME/.sourcegraph/executor-temp
# Required for frontend and executor to communicate
EXECUTOR_FRONTEND_URL: http://localhost:3080
# Must match the secret defined in the site config.
EXECUTOR_FRONTEND_PASSWORD: hunter2hunter2hunter2
# Disable firecracker inside executor in dev
EXECUTOR_USE_FIRECRACKER: false
codeintel-executor:
<<: *executor_template_bazel
env:
EXECUTOR_QUEUE_NAME: codeintel
TMPDIR: $HOME/.sourcegraph/indexer-temp
dockerCommands:
batcheshelper-builder:
# Nothing to run for this, we just want to re-run the install script every time.
cmd: exit 0
target: //cmd/batcheshelper:image_tarball
image: batcheshelper:candidate
env:
# TODO: This is required but should only be set on M1 Macs.
PLATFORM: linux/arm64
continueWatchOnExit: true
grafana:
target: //docker-images/grafana:image_tarball
docker:
image: grafana:candidate
ports:
- 3370
flags:
cpus: 1
memory: 1g
volumes:
- from: $HOME/.sourcegraph-dev/data/grafana
to: /var/lib/grafana
- from: $(pwd)/dev/grafana/all
to: /sg_config_grafana/provisioning/datasources
linux:
flags:
# Linux needs an extra arg to support host.docker.internal, which is how grafana connects
# to the prometheus backend.
add-host: host.docker.internal:host-gateway
# Docker users on Linux will generally be using direct user mapping, which
# means that they'll want the data in the volume mount to be owned by the
# same user as is running this script. Fortunately, the Grafana container
# doesn't really care what user it runs as, so long as it can write to
# /var/lib/grafana.
user: $UID
# Log file location: since we log outside of the Docker container, we should
# log somewhere that's _not_ ~/.sourcegraph-dev/data/grafana, since that gets
# volume mounted into the container and therefore has its own ownership
# semantics.
# Now for the actual logging. Grafana's output gets sent to stdout and stderr.
# We want to capture that output, but because it's fairly noisy, don't want to
# display it in the normal case.
logfile: $HOME/.sourcegraph-dev/logs/grafana/grafana.log
env:
# docker containers must access things via docker host on non-linux platforms
CACHE: false
otel-collector:
target: //docker-images/opentelemetry-collector:image_tarball
description: OpenTelemetry collector
args: '--config "/etc/otel-collector/$CONFIGURATION_FILE"'
docker:
image: opentelemetry-collector:candidate
ports:
- 4317
- 4318
- 55679
- 55670
- 8888
linux:
flags:
# Jaeger generally runs outside of Docker, so the collector needs to reach ports on
# the host; host.docker.internal only exists on macOS Docker, so on Linux
# --net=host is a very dirty way of enabling this.
net: host
env:
JAEGER_HOST: localhost
env:
JAEGER_HOST: host.docker.internal
# Overwrite the following in sg.config.overwrite.yaml, based on which collector
# config you are using - see docker-images/opentelemetry-collector for more details.
CONFIGURATION_FILE: 'configs/jaeger.yaml'
postgres_exporter:
target: //docker-images/postgres_exporter:image_tarball
docker:
image: postgres-exporter:candidate
flags:
cpus: 1
memory: 1g
ports:
- 9187
linux:
flags:
# Linux needs an extra arg to support host.docker.internal, which is how
# postgres_exporter connects to the Postgres database on the host.
add-host: host.docker.internal:host-gateway
net: host
precmd: |
# Use psql to read the effective values for PG* env vars (instead of, e.g., hardcoding the default
# values).
get_pg_env() { psql -c '\set' | grep "$1" | cut -f 2 -d "'"; }
PGHOST=${PGHOST-$(get_pg_env HOST)}
PGUSER=${PGUSER-$(get_pg_env USER)}
PGPORT=${PGPORT-$(get_pg_env PORT)}
# we need to be able to query the migration_logs table
PGDATABASE=${PGDATABASE-$(get_pg_env DBNAME)}
ADJUSTED_HOST=${PGHOST:-127.0.0.1}
if [[ ("$ADJUSTED_HOST" == "localhost" || "$ADJUSTED_HOST" == "127.0.0.1" || -f "$ADJUSTED_HOST") && "$OSTYPE" != "linux-gnu" ]]; then
ADJUSTED_HOST="host.docker.internal"
fi
env:
DATA_SOURCE_NAME: postgresql://${PGUSER}:${PGPASSWORD}@${ADJUSTED_HOST}:${PGPORT}/${PGDATABASE}?sslmode=${PGSSLMODE:-disable}
prometheus:
target: //docker-images/prometheus:image_tarball
logfile: $HOME/.sourcegraph-dev/logs/prometheus/prometheus.log
docker:
image: prometheus:candidate
volumes:
- from: $HOME/.sourcegraph-dev/data/prometheus
to: /prometheus
- from: $(pwd)/$CONFIG_DIR
to: /sg_prometheus_add_ons
flags:
cpus: 1
memory: 4g
ports:
- 9090
linux:
flags:
net: host
user: $UID
env:
PROM_TARGETS: dev/prometheus/linux/prometheus_targets.yml
SRC_FRONTEND_INTERNAL: localhost:3090
precmd: cp ${PROM_TARGETS} "${CONFIG_DIR}"/prometheus_targets.yml
env:
CONFIG_DIR: docker-images/prometheus/config
PROM_TARGETS: dev/prometheus/all/prometheus_targets.yml
SRC_FRONTEND_INTERNAL: host.docker.internal:3090
SRC_LOG_LEVEL: info
SRC_DEVELOPMENT: true
DISABLE_SOURCEGRAPH_CONFIG: false
DISABLE_ALERTMANAGER: false
PROMETHEUS_ADDITIONAL_FLAGS: '--web.enable-lifecycle --web.enable-admin-api'
syntax-highlighter:
ignoreStdout: true
ignoreStderr: true
docker:
image: sourcegraph/syntax-highlighter:insiders
pull: true
ports:
- 9238
env:
WORKERS: 1
ROCKET_ADDRESS: 0.0.0.0
#
# CommandSets ################################################################
#
defaultCommandset: enterprise
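# Command sets are started with `sg start <name>` (e.g. `sg start enterprise-e2e`);
# plain `sg start` should fall back to the defaultCommandset above.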
commandsets:
enterprise-bazel: &enterprise_bazel_set
checks:
- redis
- postgres
- git
- bazelisk
- ibazel
- dev-private
bazelCommands:
- blobstore
- docsite
- frontend
- worker
- repo-updater
- gitserver-0
- gitserver-1
- searcher
- symbols
# - syntax-highlighter
commands:
- web
- zoekt-index-0
- zoekt-index-1
- zoekt-web-0
- zoekt-web-1
- caddy
# If you modify this command set, please consider also updating the dotcom runset.
enterprise: &enterprise_set
checks:
- docker
- redis
- postgres
- git
- dev-private
commands:
- frontend
- worker
- repo-updater
- web
- gitserver-0
- gitserver-1
- searcher
- caddy
- symbols
# TODO https://github.com/sourcegraph/devx-support/issues/537
# - docsite
- syntax-highlighter
- zoekt-index-0
- zoekt-index-1
- zoekt-web-0
- zoekt-web-1
- blobstore
- embeddings
env:
DISABLE_CODE_INSIGHTS_HISTORICAL: false
DISABLE_CODE_INSIGHTS: false
enterprise-e2e:
<<: *enterprise_set
env:
# EXTSVC_CONFIG_FILE being set prevents the e2e test suite from adding
# additional connections.
EXTSVC_CONFIG_FILE: ''
dotcom:
# This is 95% the enterprise runset, with the addition of Cody Gateway.
checks:
- docker
- redis
- postgres
- git
- dev-private
commands:
- frontend
- worker
- repo-updater
- web
- gitserver-0
- gitserver-1
- searcher
- symbols
- caddy
- syntax-highlighter
- zoekt-index-0
- zoekt-index-1
- zoekt-web-0
- zoekt-web-1
- blobstore
- embeddings
- cody-gateway
- enterprise-portal # required by Cody Gateway
env:
SOURCEGRAPHDOTCOM_MODE: true
codeintel-bazel: &codeintel_bazel_set
checks:
- docker
- redis
- postgres
- git
- bazelisk
- ibazel
- dev-private
bazelCommands:
- blobstore
- frontend
- worker
- repo-updater
- gitserver-0
- gitserver-1
- searcher
- symbols
- syntax-highlighter
- codeintel-worker
- codeintel-executor
commands:
- web
- docsite
- zoekt-index-0
- zoekt-index-1
- zoekt-web-0
- zoekt-web-1
- caddy
- jaeger
- grafana
- prometheus
codeintel-syntactic:
checks:
- docker
- redis
- postgres
- git
- dev-private
commands:
- frontend
- web
- worker
- blobstore
- repo-updater
- gitserver-0
- gitserver-1
- syntactic-code-intel-worker-0
- syntactic-code-intel-worker-1
codeintel:
checks:
- docker
- redis
- postgres
- git
- dev-private
commands:
- frontend
- worker
- repo-updater
- web
- gitserver-0
- gitserver-1
- searcher
- symbols
- caddy
- docsite
- syntax-highlighter
- zoekt-index-0
- zoekt-index-1
- zoekt-web-0
- zoekt-web-1
- blobstore
- codeintel-worker
- codeintel-executor
# - otel-collector
- jaeger
- grafana
- prometheus
codeintel-kubernetes:
checks:
- docker
- redis
- postgres
- git
- dev-private
commands:
- frontend
- worker
- repo-updater
- web
- gitserver-0
- gitserver-1
- searcher
- symbols
- caddy
- docsite
- syntax-highlighter
- zoekt-index-0
- zoekt-index-1
- zoekt-web-0
- zoekt-web-1
- blobstore
- codeintel-worker
- codeintel-executor-kubernetes
# - otel-collector
- jaeger
- grafana
- prometheus
enterprise-codeintel:
checks:
- docker
- redis
- postgres
- git
- dev-private
commands:
- frontend
- worker
- repo-updater
- web
- gitserver-0
- gitserver-1
- searcher
- symbols
- caddy
- docsite
- syntax-highlighter
- zoekt-index-0
- zoekt-index-1
- zoekt-web-0
- zoekt-web-1
- blobstore
- codeintel-worker
- codeintel-executor
- otel-collector
- jaeger
- grafana
- prometheus
enterprise-codeintel-multi-queue-executor:
checks:
- docker
- redis
- postgres
- git
- dev-private
commands:
- frontend
- worker
- repo-updater
- web
- gitserver-0
- gitserver-1
- searcher
- symbols
- caddy
- docsite
- syntax-highlighter
- zoekt-index-0
- zoekt-index-1
- zoekt-web-0
- zoekt-web-1
- blobstore
- codeintel-worker
- multiqueue-executor
# - otel-collector
- jaeger
- grafana
- prometheus
enterprise-codeintel-bazel:
<<: *codeintel_bazel_set
enterprise-codeinsights:
checks:
- docker
- redis
- postgres
- git
- dev-private
commands:
- frontend
- worker
- repo-updater
- web
- gitserver-0
- gitserver-1
- searcher
- symbols
- caddy
- docsite
- syntax-highlighter
- zoekt-index-0
- zoekt-index-1
- zoekt-web-0
- zoekt-web-1
- blobstore
env:
DISABLE_CODE_INSIGHTS_HISTORICAL: false
DISABLE_CODE_INSIGHTS: false
api-only:
checks:
- docker
- redis
- postgres
- git
- dev-private
commands:
- frontend
- worker
- repo-updater
- gitserver-0
- gitserver-1
- searcher
- symbols
- zoekt-index-0
- zoekt-index-1
- zoekt-web-0
- zoekt-web-1
- blobstore
batches:
checks:
- docker
- redis
- postgres
- git
- dev-private
commands:
- frontend
- worker
- repo-updater
- web
- gitserver-0
- gitserver-1
- searcher
- symbols
- caddy
- docsite
- syntax-highlighter
- zoekt-index-0
- zoekt-index-1
- zoekt-web-0
- zoekt-web-1
- blobstore
- batches-executor
- batcheshelper-builder
batches-kubernetes:
checks:
- docker
- redis
- postgres
- git
- dev-private
commands:
- frontend
- worker
- repo-updater
- web
- gitserver-0
- gitserver-1
- searcher
- symbols
- caddy
- docsite
- syntax-highlighter
- zoekt-index-0
- zoekt-index-1
- zoekt-web-0
- zoekt-web-1
- blobstore
- batches-executor-kubernetes
- batcheshelper-builder
iam:
checks:
- docker
- redis
- postgres
- git
- dev-private
commands:
- frontend
- repo-updater
- web
- gitserver-0
- gitserver-1
- caddy
monitoring:
checks:
- docker
commands:
- jaeger
dockerCommands:
- otel-collector
- prometheus
- grafana
- postgres_exporter
monitoring-og:
checks:
- docker
commands:
- jaeger
- otel-collector
- prometheus
- grafana
- postgres_exporter
monitoring-alerts:
checks:
- docker
- redis
- postgres
commands:
- prometheus
- grafana
# For generated alerts docs
- docsite
# For the alerting integration with frontend
- frontend
- web
- caddy
web-standalone:
commands:
- web-standalone-http
- caddy
web-sveltekit-standalone:
commands:
- web-sveltekit-standalone
- caddy
env:
SK_PORT: 3080
# For testing our OpenTelemetry stack
otel:
checks:
- docker
commands:
- otel-collector
- jaeger
# NOTE: This is an experimental way of running a subset of Sourcegraph. See
# cmd/sourcegraph/README.md.
single-program-experimental-blame-sqs:
checks:
- git
- dev-private
- redis
commands:
- sourcegraph
- web
- caddy
env:
# Faster builds in local dev.
DEV_WEB_BUILDER_NO_SPLITTING: 1
cody-gateway:
checks:
- redis
commands:
- cody-gateway
cody-gateway-bazel:
checks:
- redis
bazelCommands:
- cody-gateway
tests:
# These can be run with `sg test [name]`
backend:
cmd: go test
defaultArgs: ./...
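# For example, `sg test backend` should run `go test ./...` (presumably from the repository root).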
bazel-backend-integration:
cmd: |
export GHE_GITHUB_TOKEN=$(gcloud secrets versions access latest --secret=GHE_GITHUB_TOKEN --quiet --project=sourcegraph-ci)
export GH_TOKEN=$(gcloud secrets versions access latest --secret=GITHUB_TOKEN --quiet --project=sourcegraph-ci)
export BITBUCKET_SERVER_USERNAME=$(gcloud secrets versions access latest --secret=BITBUCKET_SERVER_USERNAME --quiet --project=sourcegraph-ci)
export BITBUCKET_SERVER_TOKEN=$(gcloud secrets versions access latest --secret=BITBUCKET_SERVER_TOKEN --quiet --project=sourcegraph-ci)
export BITBUCKET_SERVER_URL=$(gcloud secrets versions access latest --secret=BITBUCKET_SERVER_URL --quiet --project=sourcegraph-ci)
export PERFORCE_PASSWORD=$(gcloud secrets versions access latest --secret=PERFORCE_PASSWORD --quiet --project=sourcegraph-ci)
export PERFORCE_USER=$(gcloud secrets versions access latest --secret=PERFORCE_USER --quiet --project=sourcegraph-ci)
export PERFORCE_PORT=$(gcloud secrets versions access latest --secret=PERFORCE_PORT --quiet --project=sourcegraph-ci)
export SOURCEGRAPH_LICENSE_KEY=$(gcloud secrets versions access latest --secret=SOURCEGRAPH_LICENSE_KEY --quiet --project=sourcegraph-ci)
export SOURCEGRAPH_LICENSE_GENERATION_KEY=$(gcloud secrets versions access latest --secret=SOURCEGRAPH_LICENSE_GENERATION_KEY --quiet --project=sourcegraph-ci)
bazel test //testing:backend_integration_test --verbose_failures --sandbox_debug
bazel-e2e:
cmd: |
export GHE_GITHUB_TOKEN=$(gcloud secrets versions access latest --secret=GHE_GITHUB_TOKEN --quiet --project=sourcegraph-ci)
export GH_TOKEN=$(gcloud secrets versions access latest --secret=GITHUB_TOKEN --quiet --project=sourcegraph-ci)
export SOURCEGRAPH_LICENSE_KEY=$(gcloud secrets versions access latest --secret=SOURCEGRAPH_LICENSE_KEY --quiet --project=sourcegraph-ci)
export SOURCEGRAPH_LICENSE_GENERATION_KEY=$(gcloud secrets versions access latest --secret=SOURCEGRAPH_LICENSE_GENERATION_KEY --quiet --project=sourcegraph-ci)
bazel test //testing:e2e_test --test_env=HEADLESS=false --test_env=SOURCEGRAPH_BASE_URL="http://localhost:7080" --test_env=GHE_GITHUB_TOKEN=$GHE_GITHUB_TOKEN --test_env=GH_TOKEN=$GH_TOKEN --test_env=DISPLAY=$DISPLAY
bazel-web-integration:
cmd: |
export GH_TOKEN=$(gcloud secrets versions access latest --secret=GITHUB_TOKEN --quiet --project=sourcegraph-ci)
export PERCY_TOKEN=$(gcloud secrets versions access latest --secret=PERCY_TOKEN --quiet --project=sourcegraph-ci)
bazel test //client/web/src/integration:integration-tests --test_env=HEADLESS=false --test_env=SOURCEGRAPH_BASE_URL="http://localhost:7080" --test_env=GH_TOKEN=$GH_TOKEN --test_env=DISPLAY=$DISPLAY --test_env=PERCY_TOKEN=$PERCY_TOKEN
backend-integration:
cmd: cd dev/gqltest && go test -long -base-url $BASE_URL -email $EMAIL -username $USERNAME -password $PASSWORD ./gqltest
env:
# These are defaults. They can be overwritten by setting the env vars when
# running the command.
BASE_URL: 'http://localhost:3080'
PASSWORD: '12345'
bext:
cmd: pnpm --filter @sourcegraph/browser test
bext-build:
cmd: EXTENSION_PERMISSIONS_ALL_URLS=true pnpm --filter @sourcegraph/browser build
bext-integration:
cmd: pnpm --filter @sourcegraph/browser test-integration
bext-e2e:
cmd: pnpm --filter @sourcegraph/browser mocha ./src/end-to-end/github.test.ts ./src/end-to-end/gitlab.test.ts
env:
SOURCEGRAPH_BASE_URL: https://sourcegraph.com
client:
cmd: pnpm run test
docsite:
cmd: .bin/docsite_${DOCSITE_VERSION} check ./doc
env:
DOCSITE_VERSION: v1.9.4 # Update DOCSITE_VERSION everywhere it appears (including outside this repo)
web-e2e:
preamble: |
A Sourcegraph instance must already be running for these tests to work, most
commonly with: `sg start enterprise-e2e`
See more details: https://docs-legacy.sourcegraph.com/dev/how-to/testing#running-end-to-end-tests
cmd: pnpm test-e2e
env:
TEST_USER_EMAIL: [email protected]
TEST_USER_PASSWORD: supersecurepassword
SOURCEGRAPH_BASE_URL: https://sourcegraph.test:3443
BROWSER: chrome
externalSecrets:
GH_TOKEN:
project: 'sourcegraph-ci'
name: 'BUILDKITE_GITHUBDOTCOM_TOKEN'
web-regression:
preamble: |
A Sourcegraph instance must already be running for these tests to work, most
commonly with: `sg start enterprise-e2e`
See more details: https://docs-legacy.sourcegraph.com/dev/how-to/testing#running-regression-tests
cmd: pnpm test-regression
env:
SOURCEGRAPH_SUDO_USER: test
SOURCEGRAPH_BASE_URL: https://sourcegraph.test:3443
TEST_USER_PASSWORD: supersecurepassword
BROWSER: chrome
web-integration:
preamble: |
A web application should be built for these tests to work, most
commonly with: `sg run web-integration-build` or `sg run web-integration-build-prod` for production build.
See more details: https://docs-legacy.sourcegraph.com/dev/how-to/testing#running-integration-tests
cmd: pnpm test-integration
web-integration:debug:
preamble: |
A Sourcegraph instance must already be running for these tests to work, most
commonly with: `sg start web-standalone`
See more details: https://docs-legacy.sourcegraph.com/dev/how-to/testing#running-integration-tests
cmd: pnpm test-integration:debug