[Kubernetes] Resources
You can configure how the Kubernetes cluster allocates resources (such as CPU and memory) to each pod.
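A minimal sketch of per-container resource configuration, assuming an illustrative pod name and image (the `requests`/`limits` fields are the standard Kubernetes mechanism):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo     # hypothetical name
spec:
  containers:
  - name: app
    image: nginx          # illustrative image
    resources:
      requests:           # what the scheduler reserves for the container
        cpu: "250m"
        memory: "64Mi"
      limits:             # hard ceiling enforced at runtime
        cpu: "500m"
        memory: "128Mi"
```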
[Kubernetes] Service Accounts
You can use dedicated ServiceAccounts with restricted permissions to let containers access the Kubernetes API.
- Every namespace has a default service account.
- Each service account has a matching Secret object, which holds a token.
- When a pod is created, the service account token is mounted into it automatically.
- The pod accesses the Kubernetes API using the mounted service account token.
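The steps above can be sketched as a manifest, assuming a hypothetical service account name and image; the token for the named account is auto-mounted under `/var/run/secrets/kubernetes.io/serviceaccount`:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: reader-sa           # hypothetical service account
---
apiVersion: v1
kind: Pod
metadata:
  name: sa-demo
spec:
  serviceAccountName: reader-sa   # token for this account is mounted automatically
  containers:
  - name: app
    image: nginx            # illustrative image
```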
[Kubernetes] SecurityContext
Kubernetes organizes and launches container processes. You can configure which user or group runs the process, either at the Docker level or at the Kubernetes level.
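At the Kubernetes level, this is expressed with `securityContext`; a sketch with illustrative values, showing that a container-level setting overrides the pod-level one:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: security-demo     # hypothetical name
spec:
  securityContext:        # pod level: applies to all containers
    runAsUser: 1000
    runAsGroup: 3000
  containers:
  - name: app
    image: nginx          # illustrative image
    securityContext:      # container level: overrides the pod-level setting
      runAsUser: 2000
```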
[Kubernetes] Commands
When you run a container in a pod, you might want to run a command at start-up. The process has two stages: the container (Docker) level and the Kubernetes level.
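At the Kubernetes level, `command` and `args` map onto the Docker-level `ENTRYPOINT` and `CMD`; a minimal sketch with an illustrative image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: command-demo      # hypothetical name
spec:
  containers:
  - name: app
    image: busybox        # illustrative image
    command: ["sleep"]    # overrides the image's ENTRYPOINT
    args: ["3600"]        # overrides the image's CMD
```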
[Spark By Example] Spark SQL – UDFs
In Spark SQL, you can define your own custom functions and use them in SQL statements. The following example shows how to create a very simple UDF, register it, and use it in SQL.
[Spark By Example] DataFrameReader
DataFrameReader is an interface for loading a DataFrame from external sources.
You cannot create a DataFrameReader object directly; you access it through the "SparkSession.read" property.
[Spark By Example] Spark SQL – Grouping
Let's play with Spark SQL some more.
[Note] When the underlying DataFrame's schema changes, the view must be re-created.
[Spark By Example] Spark SQL – TempView
With Spark SQL, you can query your data using familiar SQL syntax.