application.yml

spring:
  datasource:
    url: jdbc:db2://localhost:25000/SAMPLE
    username: "<your-username>"
    password: "<your-password>"
    driverClassName: com.ibm.db2.jcc.DB2Driver
    testWhileIdle: true
    validationQuery: SELECT 1 FROM SYSIBM.SYSDUMMY1
  jpa:
    show-sql: true
    generate-ddl: false
    database-platform: org.hibernate.dialect.DB2Dialect
    hibernate:
      ddl-auto: none
      naming-strategy: org.hibernate.cfg.ImprovedNamingStrategy
    properties:
      hibernate:
        dialect: org.hibernate.dialect.DB2Dialect

mybatis:
  mapper-locations: mapper/TestMapper.xml
  #  mapper-locations: mapper/**/*.xml # default mapper path
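`mapper-locations` points at `mapper/TestMapper.xml` on the classpath. A minimal sketch of what such a mapper file might contain — the namespace, statement id, and query here are assumptions for illustration, not the actual mapper:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN"
    "http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="com.example.TestMapper">
    <!-- hypothetical statement against the DB2 SAMPLE database -->
    <select id="now" resultType="string">
        SELECT CURRENT TIMESTAMP FROM SYSIBM.SYSDUMMY1
    </select>
</mapper>
```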
 
 
 
pom.xml
 
<!-- mybatis -->
<dependency>
    <groupId>org.mybatis.spring.boot</groupId>
    <artifactId>mybatis-spring-boot-starter</artifactId>
    <version>2.2.1</version>
</dependency>

<!-- db2 connection -->
<dependency>
    <groupId>com.ibm.db2.jcc</groupId>
    <artifactId>db2jcc4</artifactId>
    <version>10.1</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
    <version>2.6.7</version>
</dependency>

 

C:\Program Files\nodejs

In the npm.cmd file, change `prefix -g` to `prefix -location=global`:
--------------------------------------
:: Created by npm, please don't edit manually.
@ECHO OFF

SETLOCAL

SET "NODE_EXE=%~dp0\node.exe"
IF NOT EXIST "%NODE_EXE%" (
  SET "NODE_EXE=node"
)

SET "NPM_CLI_JS=%~dp0\node_modules\npm\bin\npm-cli.js"
FOR /F "delims=" %%F IN ('CALL "%NODE_EXE%" "%NPM_CLI_JS%" prefix -location=global') DO (
  SET "NPM_PREFIX_NPM_CLI_JS=%%F\node_modules\npm\bin\npm-cli.js"
)
IF EXIST "%NPM_PREFIX_NPM_CLI_JS%" (
  SET "NPM_CLI_JS=%NPM_PREFIX_NPM_CLI_JS%"
)

"%NODE_EXE%" "%NPM_CLI_JS%" %*

--------------------------------------

document.addEventListener('DOMContentLoaded', function() {
  var calendarEl = document.getElementById('calendar');

  var calendar = new FullCalendar.Calendar(calendarEl, {
    height: '100%',
    expandRows: true,
    headerToolbar: {
      left: 'prev,next today',
      center: 'title',
      right: 'dayGridMonth,timeGridWeek,timeGridDay,listWeek'
    },

    navLinks: true, // can click day/week names to navigate views
    editable: false,
    selectable: false,
    //selectMirror: true,
    fixedWeekCount: false,
    locale: 'ko',
    businessHours: true,
    nowIndicator: true,
    dayMaxEvents: true, // allow "more" link when too many events

    googleCalendarApiKey: '<your-google-calendar-api-key>',
    // When several Google calendars are linked, merge them with eventSources
    eventSources: [
      {
        googleCalendarId: '<your-public-google-calendar-id>@group.calendar.google.com',
        color: 'green'
      },
      { // South Korean public holidays
        googleCalendarId: 'ko.south_korea#holiday@group.v.calendar.google.com'
      }
    ],
    eventClick: function(info) {
      // prevent the click from navigating to the Google Calendar URL
      info.jsEvent.stopPropagation();
      info.jsEvent.preventDefault();
    }

    /*
    // When only one Google calendar is linked
    events: {
      googleCalendarId: 'ko.south_korea#holiday@group.v.calendar.google.com'
    },
    eventClick: function(info) {
      info.jsEvent.stopPropagation();
      info.jsEvent.preventDefault();
    }
    */

  });

  calendar.render();
});


Reference URLs)

Getting Started : https://docs.hazelcast.com/imdg/latest/getting-started.html

Download : https://hazelcast.org/imdg/download/

https://stackoverflow.com/questions/62828652/how-to-install-hazelcast-imdg-in-ubuntu-server

 

 

Install)

wget -qO - https://repository.hazelcast.com/api/gpg/key/public | sudo apt-key add -
echo "deb https://repository.hazelcast.com/debian stable main" | sudo tee -a /etc/apt/sources.list
sudo apt update && sudo apt install hazelcast

 

Open three terminal windows and run `hz start` in each to create a cluster.

 

 

Java client source:

// Installation note: the Hazelcast Java client must be on the classpath.
// The simplest way is to put the `hazelcast-all` JAR on the classpath,
// e.g. via Maven. See the "Installation" chapter for details.

import com.hazelcast.client.HazelcastClient;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

public class TesterFile {
    public static void main(String[] args) {
        // Start the client and connect to the cluster
        HazelcastInstance hz = HazelcastClient.newHazelcastClient();
        // Create a distributed map in the cluster
        IMap<String, String> map = hz.getMap("my-distributed-map");
        // Standard put and get
        map.put("1", "John");
        map.put("2", "Mary");
        map.put("3", "Jane");
        // Shutdown the client
//      hz.shutdown();
    }
}

> Run it twice as a Java application.
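As the source comments note, the client needs the Hazelcast Java client on the classpath; a minimal Maven dependency sketch (the version shown is an assumption — match it to your cluster version):

```xml
<dependency>
    <groupId>com.hazelcast</groupId>
    <artifactId>hazelcast-all</artifactId>
    <version>4.2</version>
</dependency>
```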

 

Cluster management console

 

 

 

1. geth 

geth, short for "go-ethereum", is Ethereum client software developed in the Go language.

 

dcans@dcans:~$ sudo apt-get install software-properties-common
dcans@dcans:~$ sudo add-apt-repository -y ppa:ethereum/ethereum
dcans@dcans:~$ sudo apt-get install ethereum     
dcans@dcans:~$ geth version
Geth
Version: 1.10.14-stable
Git Commit: 11a3a35097ec493d71137c9bfa433bceeedff6c0
Architecture: amd64
Go Version: go1.17.5
Operating System: linux
GOPATH=
GOROOT=go
dcans@dcans:~$ geth console
.... stop with the exit command
INFO [12-26|13:47:33.641] Blockchain stopped 

 

 

refer : https://roots2019.tistory.com/251

          https://steemit.com/@pangol

 


 

2. truffle 

truffle is an Ethereum framework that supports compiling and deploying Solidity source code,

and provides an easy way to test smart contracts.

 

~ npm install -g truffle

dcans@dcans:~$ truffle version
Truffle v5.4.26 (core: 5.4.26)
Solidity v0.5.16 (solc-js)
Node v10.19.0
Web3.js v1.5.3
dcans@dcans:~$ 

 

3. Install ganache-cli

Ganache is a personal blockchain used for Ethereum-based DApp development; it can be installed on a PC and used for testing.

 

dcans@dcans:~$ sudo npm install -g ganache-cli
/usr/local/bin/ganache-cli -> /usr/local/lib/node_modules/ganache-cli/cli.js

> keccak@3.0.1 install /usr/local/lib/node_modules/ganache-cli/node_modules/keccak
> node-gyp-build || exit 0


> secp256k1@4.0.2 install /usr/local/lib/node_modules/ganache-cli/node_modules/secp256k1
> node-gyp-build || exit 0

+ ganache-cli@6.12.2
added 101 packages from 182 contributors in 11.529s
dcans@dcans:~$ 

 

dcans@dcans:~$ ganache-cli version
Ganache CLI v6.12.2 (ganache-core: 2.13.2)

Available Accounts (10 test accounts)
==================
(0) 0x98E804A6984c405B60c39bA7A2896f11EfAFfceb (100 ETH)
(1) 0xa27342217C808c5D044fff5AbcDA3f9C0ba4D154 (100 ETH)
(2) 0xA393dD19711014F75dDeFb954759737FD2B4329C (100 ETH)
(3) 0xE4132911167e4412B0492Dca731ffAEb64Eaa93f (100 ETH)
(4) 0x85151de1AF1a885A5c8B49b27cf2C4927f78CFe5 (100 ETH)
(5) 0x936208e2D3498502F00919bFc589d77e1109248A (100 ETH)
(6) 0x585B2BA68133eE15BcC90777Bd99c4FE3ACBBf10 (100 ETH)
(7) 0x7FbcF7598F1029645f3cf68818C65bDBacfea307 (100 ETH)
(8) 0x9A5a065E805f73eD70d71A5D7Ed81Ef94d81E6b4 (100 ETH)
(9) 0x3C4A081378408F43b46e911271C4C7564892d0eA (100 ETH)

Private Keys
==================
(0) 0xec80c48e6d27c664c4162d843cdb1b17f6fa24aab5ac73da637636514aca662f
(1) 0xe3392e20c260f732cf716bf821e984e8f2d34f0e6c5bb0df200d2886ea7e420f
(2) 0x201d570ef89b92c2451df91459842b8ec8460ac59ed7b0b2015005b6327c598f
(3) 0xbcde4e1f21920a01ebb5b9fea0e3d15ed2df0c56b5cbea906ab97b7a16b4c476
(4) 0xaa94bde342ad9e030084bfc47429eb81c6256c8603ead0bd7b72f3a55227d285
(5) 0x78cd50e2885d606726960ccdb8f6b2e3445a9d28ba143fd2eabd2dd68d981012
(6) 0x6841b44ffd2546fea33f00e2a695e81d1ca7e0eaf03a9c4d7d80445be6d6e123
(7) 0x5adb03c74394f864420ea39aeb827ed0622bfdcae2b97e62b59c3ae11c72d6c7
(8) 0xd97c2572e4a1fe26f42e8e0c5f64f1329053e80884bb59eb77c593f1813c6748
(9) 0x958aba2a124d004d103d714cec39e042d673ef95b97634ad76371ed6a54b623e

HD Wallet
==================
Mnemonic:      sorry tragic keep tank glow where apology surprise stem dragon found daring
Base HD Path:  m/44'/60'/0'/0/{account_index}

Gas Price
==================
20000000000

Gas Limit
==================
6721975

Call Gas Limit
==================
9007199254740991

Listening on 127.0.0.1:8545

8545 => the port that accepts JSON-RPC (remote procedure call) requests
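As a sketch of what that port accepts, the block below builds and validates a JSON-RPC request body (`eth_blockNumber` is a standard Ethereum method). Actually sending it requires a running node, so the `curl` call is left commented out.

```shell
# Construct the JSON-RPC request body for the port above; validate it with
# python3's stdlib JSON tool before sending.
body='{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
echo "$body" | python3 -m json.tool > /dev/null && echo "payload OK"
# To send it to a live node:
# curl -s -X POST -H 'Content-Type: application/json' -d "$body" http://127.0.0.1:8545
```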

 

refer : https://sabarada.tistory.com/12?category=792127

Log file
 > kafka producer (pushes data for a given topic to the broker)
 > kafka consumer pulls a given topic from the broker.

The example below builds the pipeline
"log (consumer data) => FILE tail => kafka producer (delivers to the broker)":
one console monitors the consumer and saves its data to the file pjm1.log,
while a producer console sends JSON data to the broker; the data is generated by an infinite-loop routine.

 

 

Consumer console:

~/kafka/logs$ ~/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic pjm1 --from-beginning > pjm1.log

 

Producer console:
echo '[  {    "name": "jinmyung",    "age": 500,    "secretIdentity": "Andrew",    "powers": [      "Radiation resistance",      "Turning tiny",      "Radiation blast"    ]  }]'  | ~/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic pjm1
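The infinite-loop data generator mentioned above can be sketched in shell as follows (bounded to 3 iterations here so the example terminates; the `name`/`seq` fields are placeholders):

```shell
# Emit one JSON record per iteration; in the real pipeline the loop runs
# forever and its output is piped to the console producer.
i=0
while [ "$i" -lt 3 ]; do
  echo "{\"name\": \"jinmyung\", \"seq\": $i}"
  i=$((i + 1))
  # sleep 1   # pace the stream in the real pipeline
done
# ... | ~/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic pjm1
```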

 

 


Log in at https://streamsets.com/

 

Use the Deployments menu to carry out the deployment task

: A deployment is a group of identical engine instances deployed within an environment.

Running the install script in the local environment downloads the StreamSets collector.

When the script runs, it downloads the file from the path below. (If an error occurs, download it through a browser instead.)

 

Starting download of 

https://archives.streamsets.com/datacollector/4.1.0/tarball/streamsets-datacollector-core-4.1.0.tgz

 

 

dcans@dcans:~/.streamsets/download/dc/collector$ sudo bin/streamsets dc

Java 1.8 detected; adding $SDC_JAVA8_OPTS of "-XX:+UseConcMarkSweepGC -XX:+UseParNewGC -Djdk.nio.maxCachedBufferSize=262144" to $SDC_JAVA_OPTS
INFO - Starting engine
Logging initialized @5466ms to org.eclipse.jetty.util.log.Slf4jLog
Running on URI : 'http://dcans:18630'

 

 

 

 

 

 

 

 

 

-------------------------------------------------------

JDK 1.8 or later is required to run the StreamSets collector.

Since the installed version did not match, I removed the existing JDK and reinstalled it.

 

How to remove the JDK

Remove OpenJDK,

sudo apt remove openjdk*

Remove OpenJDK along with dependencies,

sudo apt remove --auto-remove openjdk*

Remove OpenJDK and the configuration files

sudo apt purge openjdk*

 

 


 

kafka@dcans:~/logs$ sudo systemctl status zookeeper
● zookeeper.service
     Loaded: loaded (/etc/systemd/system/zookeeper.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2021-11-21 18:21:47 KST; 27min ago
   Main PID: 949 (java)
      Tasks: 39 (limit: 8176)
     Memory: 77.5M
     CGroup: /system.slice/zookeeper.service
             └─949 java -Xmx512M -Xms512M -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -XX:MaxInlineLevel=15 -Djava.awt.headless=tr>

INFO Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /tmp/zookeeper/version-2 sn>

INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.Ser>
INFO Configuring NIO connection handler with 10s sessionless connection timeout, 1 selector thread(s), 8 worker threads, >
INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
INFO Snapshotting: 0x0 to /tmp/zookeeper/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
INFO Snapshotting: 0x0 to /tmp/zookeeper/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
INFO Using checkIntervalMs=60000 maxPerMinute=10000 (org.apache.zookeeper.server.ContainerManager)
INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
lines 1-19/19 (END)

 

kafka@dcans:~/logs$ sudo systemctl status kafka
● kafka.service
     Loaded: loaded (/etc/systemd/system/kafka.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2021-11-21 18:44:01 KST; 4min 14s ago
   Main PID: 7383 (sh)
      Tasks: 72 (limit: 8176)
     Memory: 331.1M
     CGroup: /system.slice/kafka.service
             ├─7383 /bin/sh -c /home/kafka/kafka/bin/kafka-server-start.sh /home/kafka/kafka/config/server.properties > /home/kafka/kafka/kafka.log 2>&1
             └─7384 java -Xmx1G -Xms1G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -XX:MaxInlineLevel=15 -Djava.awt.headless=true >

11월 21 18:44:01 dcans systemd[1]: Started kafka.service.
lines 1-11/11 (END)

 

 

 

If the error below occurs when starting Kafka, delete the meta.properties file and restart Kafka.

kafka.common.InconsistentClusterIdException: The Cluster ID WX_SndaJRfmYqLQRDavlVg doesn't match stored clusterId Some(uJ8xz_r_SuKKvuQOCK0zgg) in meta.properties.

 

Check the log directory configured in /home/kafka/kafka/config/server.properties:

############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=/home/kafka/logs

Delete the meta.properties file in that directory,

then restart Kafka.
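The recovery step can be sketched as follows — rehearsed here against a temporary properties file so it runs anywhere; the real file is /home/kafka/kafka/config/server.properties, and the destructive commands are left commented out:

```shell
# Derive the meta.properties path from the log.dirs setting.
props=$(mktemp)
echo 'log.dirs=/home/kafka/logs' > "$props"   # stand-in for server.properties
log_dir=$(grep '^log.dirs=' "$props" | cut -d= -f2)
echo "$log_dir/meta.properties"               # the file to delete
rm -f "$props"
# rm "$log_dir/meta.properties" && sudo systemctl restart kafka
```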

 

 

Kafka install/test reference URL)

https://www.digitalocean.com/community/tutorials/how-to-install-apache-kafka-on-ubuntu-20-04

 


 

producer, consumer



1. Per-CPU details, one block per logical CPU

dcans@dcans:~$ cat /proc/cpuinfo
processor : 0
vendor_id : AuthenticAMD
cpu family : 22
model : 48
model name : AMD A8-6410 APU with AMD Radeon R5 Graphics
stepping : 1
microcode : 0x7030105
cpu MHz : 1271.893
cache size : 2048 KB
physical id : 0
siblings : 4
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
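Since /proc/cpuinfo prints one "processor" stanza per logical CPU, counting those stanzas gives the logical CPU count:

```shell
# Each "processor" line in /proc/cpuinfo starts one per-CPU stanza,
# so the line count equals the number of logical CPUs.
grep -c '^processor' /proc/cpuinfo
```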


2. The lscpu command


dcans@dcans:~$ lscpu
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   40 bits physical, 48 bits virtual
CPU(s):                          4
On-line CPU(s) list:             0-3
Thread(s) per core:              1
Core(s) per socket:              4
Socket(s):                       1
NUMA node(s):                    1
Vendor ID:                       AuthenticAMD
CPU family:                      22
Model:                           48
Model name:                      AMD A8-6410 APU with AMD Radeon R5 Graphics
Stepping:                        1
Frequency boost:                 enabled
CPU MHz:                         1400.000
CPU max MHz:                     2000.0000
CPU min MHz:                     1000.0000
BogoMIPS:                        3992.33
Virtualization:                  AMD-V
L1d cache:                       128 KiB
L1i cache:                       128 KiB
L2 cache:                        2 MiB
NUMA node0 CPU(s):               0-3
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; Full AMD retpoline, STIBP disabled, RSB filling
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected

dcans@dcans:~$ lsb_release -a
Distributor ID: Ubuntu
Description: Ubuntu 20.04.3 LTS
Release: 20.04
Codename: focal

 

 

1. stop running jenkins

dcans@dcans:~$ sudo service jenkins stop

 

2. remove jenkins
dcans@dcans:~$ sudo apt remove jenkins
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
  daemon linux-headers-5.11.0-36-generic linux-hwe-5.11-headers-5.11.0-36
  linux-image-5.11.0-36-generic linux-modules-5.11.0-36-generic
  linux-modules-extra-5.11.0-36-generic
Use 'sudo apt autoremove' to remove them.
The following packages will be REMOVED:
  jenkins
0 upgraded, 0 newly installed, 1 to remove and 12 not upgraded.
1 not fully installed or removed.
After this operation, 72.4 MB disk space will be freed.
Do you want to continue? [Y/n] y
(Reading database ... 275938 files and directories currently installed.)
Removing jenkins (2.303.2) ...


dcans@dcans:~$ sudo apt-get remove --purge jenkins
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
  daemon linux-headers-5.11.0-36-generic linux-hwe-5.11-headers-5.11.0-36
  linux-image-5.11.0-36-generic linux-modules-5.11.0-36-generic
  linux-modules-extra-5.11.0-36-generic
Use 'sudo apt autoremove' to remove them.
The following packages will be REMOVED:
  jenkins*
0 upgraded, 0 newly installed, 1 to remove and 12 not upgraded.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n] y
(Reading database ... 275933 files and directories currently installed.)
Purging configuration files for jenkins (2.303.2) ...
groupdel: group 'jenkins' does not exist
Processing triggers for systemd (245.4-4ubuntu3.13) ...

 

dcans@dcans:~$ sudo apt-get remove --auto-remove jenkins
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package 'jenkins' is not installed, so not removed.
The following packages will be REMOVED:
  daemon linux-headers-5.11.0-36-generic linux-hwe-5.11-headers-5.11.0-36
  linux-image-5.11.0-36-generic linux-modules-5.11.0-36-generic
  linux-modules-extra-5.11.0-36-generic
0 upgraded, 0 newly installed, 6 to remove and 12 not upgraded.
After this operation, 402 MB disk space will be freed.
Do you want to continue? [Y/n] y
(Reading database ... 275927 files and directories currently installed.)
Removing daemon (0.6.4-1build2) ...
Removing linux-headers-5.11.0-36-generic (5.11.0-36.40~20.04.1) ...
Removing linux-hwe-5.11-headers-5.11.0-36 (5.11.0-36.40~20.04.1) ...
Removing linux-modules-extra-5.11.0-36-generic (5.11.0-36.40~20.04.1) ...
Removing linux-image-5.11.0-36-generic (5.11.0-36.40~20.04.1) ...
/etc/kernel/postrm.d/initramfs-tools:
update-initramfs: Deleting /boot/initrd.img-5.11.0-36-generic
/etc/kernel/postrm.d/zz-update-grub:
Sourcing file `/etc/default/grub'
Sourcing file `/etc/default/grub.d/init-select.cfg'
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.11.0-40-generic
Found initrd image: /boot/initrd.img-5.11.0-40-generic
Found linux image: /boot/vmlinuz-5.11.0-37-generic
Found initrd image: /boot/initrd.img-5.11.0-37-generic
Adding boot menu entry for UEFI Firmware Settings
done
Removing linux-modules-5.11.0-36-generic (5.11.0-36.40~20.04.1) ...
Processing triggers for man-db (2.9.1-1) ...

 

3. check jenkins* files
dcans@dcans:~$ sudo find / -name 'jenkins*'
/var/crash/jenkins.0.upload
/var/crash/jenkins.0.uploaded
/var/crash/jenkins.0.crash
/var/cache/apt/archives/jenkins_2.303.2_all.deb
find: '/run/user/1000/doc': Permission denied
find: '/run/user/1000/gvfs': Permission denied
/etc/apt/sources.list.d/jenkins.list
/etc/apt/sources.list.d/jenkins.list.save
dcans@dcans:~$ 
