Warning: This is obsolete.
When you want to create a k8s cluster dedicated to your Knative experiments, this seems like the way to go with the current GKE and Knative 0.3.0:
$CLUSTER_NAME = "test-cluster"
$USER_NAME = "SomeUser@gmail.com"
gcloud services enable `
cloudapis.googleapis.com `
container.googleapis.com `
containerregistry.googleapis.com
gcloud container clusters create $CLUSTER_NAME `
--zone=europe-west1-b `
--cluster-version=latest `
--machine-type=n1-standard-4 `
--enable-autoscaling --min-nodes=1 --max-nodes=10 `
--enable-autorepair `
--scopes=service-control,service-management,compute-rw,storage-ro,cloud-platform,logging-write,monitoring-write,pubsub,datastore `
--num-nodes=3
kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=$USER_NAME
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/istio-crds.yaml
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/istio.yaml
kubectl label namespace default istio-injection=enabled
kubectl apply `
--filename https://github.com/knative/serving/releases/download/v0.3.0/serving.yaml `
--filename https://github.com/knative/build/releases/download/v0.3.0/release.yaml `
--filename https://github.com/knative/eventing/releases/download/v0.3.0/eventing.yaml `
--filename https://github.com/knative/eventing-sources/releases/download/v0.3.0/release.yaml
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/monitoring-metrics-prometheus.yaml
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/monitoring-logs-elasticsearch.yaml
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.3.0/monitoring-tracing-zipkin.yaml
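It takes a minute or two for everything to come up; something like the following (the namespaces are the ones the 0.3.0 manifests create) lets you watch until all pods report Running:
kubectl get pods --namespace knative-serving
kubectl get pods --namespace knative-build
kubectl get pods --namespace knative-eventing
kubectl get pods --namespace knative-monitoring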
This is mostly described in the docs page Knative Install on Google Kubernetes Engine. I find it useful to bypass the automated user name lookup (gcloud config get-value core/account), as I ran into some problems with capitalization there.
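For reference, the automated variant from the docs would amount to something like this; setting $USER_NAME by hand, as above, sidesteps the capitalization mismatch:
$USER_NAME = gcloud config get-value core/account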
Another docs page to follow is Hello World - .NET Core sample. Please be aware that the steps there depend on the version of the SDK your dotnet is running against:
> dotnet --info
Version: 2.2.103
> dotnet new web -o helloworld-csharp
[...]
> cat helloworld-csharp/helloworld-csharp.csproj
<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.2</TargetFramework>
    <AspNetCoreHostingModel>InProcess</AspNetCoreHostingModel>
    <RootNamespace>helloworld_csharp</RootNamespace>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.App" />
    <PackageReference Include="Microsoft.AspNetCore.Razor.Design" Version="2.2.0" PrivateAssets="All" />
  </ItemGroup>
</Project>
> rm -Recurse -Force helloworld-csharp
> '{ "sdk": { "version": "2.1.103" } }' > global.json
> dotnet --info
A JSON parsing exception occurred in [C:\Source\temp\global.json]: * Line 1, Column 2 Syntax error: Malformed token
Version: 2.2.103
> ... removing BOM from global.json, wtf https://github.com/dotnet/core-setup/issues/185 ...
> dotnet --info
Version: 2.1.103
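The BOM comes from the > redirection (Windows PowerShell writes the file with a byte-order mark by default). One way to avoid that round-trip is to write global.json with an explicitly BOM-free encoding, for example:
Set-Content -Path global.json -Value '{ "sdk": { "version": "2.1.103" } }' -Encoding Ascii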
Re-running dotnet new web with the pinned SDK now produces a project that targets netcoreapp2.0:
<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <Folder Include="wwwroot\" />
  </ItemGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.6" />
  </ItemGroup>
</Project>
I also recommend replacing the included Dockerfile with a two-stage build along these lines:
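# build stage: restore NuGet packages and publish a self-contained binary for Alpine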
FROM microsoft/dotnet:2.2-sdk AS build
WORKDIR /app
COPY *.csproj .
RUN dotnet restore
COPY . .
RUN dotnet publish --configuration Release -r alpine-x64 --output /published
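# runtime stage: only the published output lands on top of the small runtime-deps image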
FROM microsoft/dotnet:2.2-runtime-deps-alpine
COPY --from=build /published /published
ENV PORT 8080
EXPOSE $PORT
ENTRYPOINT ["/published/helloworld-csharp"]
The difference in size is ten-fold (500 MB for the SDK image, 50 MB for the runtime Alpine image).
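With the Dockerfile in place, pushing the image to Container Registry and deploying the Knative Service roughly follow the docs; a sketch, assuming $PROJECT_ID holds your GCP project ID and service.yaml is the Service manifest from the sample:
$PROJECT_ID = "your-gcp-project"
gcloud auth configure-docker
docker build -t gcr.io/$PROJECT_ID/helloworld-csharp .
docker push gcr.io/$PROJECT_ID/helloworld-csharp
kubectl apply --filename service.yaml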
Once the image is pushed and the Service is applied, we can run
$ISTIO_INGRESS = (kubectl get svc istio-ingressgateway --namespace istio-system | Select-Object -Skip 1) -split '\s+' | Select-Object -Index 3
while (1) { Invoke-WebRequest -Headers @{ "Host" = "helloworld-csharp.default.example.com" } http://$ISTIO_INGRESS }
to generate some traffic to observe. After running kubectl proxy in another terminal, we can reach
- Kibana at http://localhost:8001/api/v1/namespaces/knative-monitoring/services/kibana-logging/proxy/app/kibana
- Zipkin at http://localhost:8001/api/v1/namespaces/istio-system/services/zipkin:9411/proxy/zipkin/
To look into the Prometheus metrics via the Grafana dashboards, you need to run
kubectl port-forward --namespace knative-monitoring $(kubectl get pods --namespace knative-monitoring --selector=app=grafana --output=jsonpath="{.items..metadata.name}") 3000
and then check http://localhost:3000.
There are several things to do now:
- We need to configure a real domain. For that, we need to give Istio a static IP (a rough sketch of both steps follows after this list).
- Our application should be able to enrich the monitoring and trace data that are, so far, generated only by the Knative wrappers.
- Also, logs!
- Health routes are important for this kind of infrastructure. We should look into how the new ASP.NET Core health checks support works with Knative.
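For the first item, a rough sketch of the GKE side: reserve a regional static IP (the name knative-ingress is a placeholder, and the region has to match the cluster's), patch it into the ingress gateway's loadBalancerIP, and put the real domain into the config-domain ConfigMap:
gcloud compute addresses create knative-ingress --region europe-west1
gcloud compute addresses describe knative-ingress --region europe-west1 --format "value(address)"
kubectl patch svc istio-ingressgateway --namespace istio-system --patch '{"spec": {"loadBalancerIP": "<the reserved address>"}}'
kubectl edit cm config-domain --namespace knative-serving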
For today though, we’ve made good progress, so let’s kill the cluster to limit spending until we find some more time to devote to the project:
gcloud container clusters delete $CLUSTER_NAME --zone=europe-west1-b