Boosting Productivity with Deep Work

For knowledge workers – university applicants, students, and company employees – I want to share my experience on working, studying, and creating productively.

Today there are many apps, social networks, and situations that distract us from our main work. Below is a situation that kills my productivity:

I came to the lab at 8 a.m. intending to finish the last section of my research paper. When I turned on my computer, someone in my family had left a Telegram message, or friends had written that something important had happened in their lives and that I should be there for them (at least virtually). By 10 a.m. I had opened my half-written paper, and while I was trying to recall what I did yesterday, another task flashed into my mind: "Today is the last day to register for next semester's courses." 11 a.m. I had just gotten back to the paper when my professor came in and asked me to write up plans for a new project. It took only 50-60 minutes, but when I finished and looked at the clock, it was already lunchtime. Four hours had passed without me completing a single important task.

If the above sounds familiar, then your time is being spent not on Deep Work but on shallow work (which demands little mental effort). Such tasks do not require intense thinking or focused attention – and anyone can do them.

I want to talk about Deep Work. If we learn to clearly distinguish deep work from shallow work, we can split our day in two and increase our productivity. Examples of deep work:

  1. You are solving a math problem with full concentration. You no longer hear the sounds and noise outside. Your whole mind and body are occupied with one thing only: solving that problem.
  2. Another example: you are writing a mobile (Android) app and have become completely absorbed in the work. You don't notice time passing at all, and lunchtime has long since gone by.
  3. You are preparing a presentation for next week's class.

According to research, people who regularly engage in this kind of deep work steadily improve their ability to focus and their willpower. Most interestingly, such people start to find deep work more pleasant and preferable than rest. This is what makes a person grow into a master of their craft.

Want to try it yourself? Beginners are advised to start with one hour a day; later you can work up to four hours of DEEP WORK. For example, reserve 8 to 9 a.m. for deep work. Create an environment with nothing around to distract you: phones switched off, coffee cup filled, doors locked. Start with the goal of doing nothing but this one task for an hour. If you concentrate fully on the task for the first 10-20 minutes, you will be immersed in it, and continuing becomes easier than stopping. Taking a 5-minute break every 30 minutes is recommended.

Even though we already know all of the above, we don't act on it, or we forget it, and so we develop the habit of checking Facebook, Telegram, or email every 5-10 minutes. Because we do deep work without giving it enough attention, both our results and our mood suffer. Splitting the workday in two and doing deep work and shallow work each at its designated time makes us several times more productive than we are now. If we keep this habit up and raise our productivity by 1% every week, in one year (52 weeks) our productivity goes from 100% to 168%. In five years we can raise it to 1329%. To give a very simple closing illustration: this means the money/reputation/knowledge that takes a year to earn in our current state could be earned in less than a month in our improved state.
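Those compounding figures are easy to verify with a couple of lines of Python (just an arithmetic check of the claim above):

```python
# Check the compounding claim: improving productivity by 1% every week.
weekly_growth = 1.01

one_year = weekly_growth ** 52          # 52 weeks in a year
five_years = weekly_growth ** (52 * 5)  # 260 weeks in five years

print(round(one_year, 2))    # 1.68 -> about 168% of the starting level
print(round(five_years, 1))  # 13.3 -> about 1329%
```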

 

 

 

 

 

 

 

Preprocess (Convert/Clean/Adjust) MOT17 Det dataset annotations to YOLOv2 format

MOT17Det is a dataset for the people detection challenge from MOT (https://motchallenge.net/data/MOT17Det/). It contains 14 videos under different lighting, viewpoint, and weather conditions; 7 of them form the training set and the other 7 are used as the test set. MOT17Det is the improved version of MOT16 (https://arxiv.org/pdf/1603.00831.pdf).

 

Dataset Statistics

According to https://arxiv.org/pdf/1603.00831.pdf, MOT16 contains ~320,000 person annotations (pedestrian + person_on_vehicle + static_person; see Table 3). It also contains a distractor class (statues, mannequins) and a reflection class (reflections of people in mirrors). These two classes can be ignored by the detector: if the detector detects them we do not count a false positive, and if it misses them we do not count a miss. That way the detector can learn from only 'clean' annotations.

[Table 3 from the MOT16 paper: annotation statistics]

This dataset's annotations are different from YOLO annotations in three ways:

  1. It contains a whole video's annotations in a single file
  2. It contains 12 classes
  3. Its annotations are in [frm_id, seq_id, xmin, ymin, w, h, confidence, class, visibility] format, not in YOLO's [relative_x, relative_y, relative_w, relative_h] format.

This repo contains my script that converts MOT17Det annotations to YOLO format: https://github.com/Jumabek/convert_MOT16_to_yolo

I converted pedestrian, person_on_vehicle, and static_person into a single positive class (labeled 0). The distractor and reflection classes are converted into a don't-know class (labeled '-1'). You should customize YOLO to ignore examples with class '-1' when computing the loss.

Note: the '-1' class is neither a negative nor a positive class. Hence, we should ignore those kinds of objects when computing the loss/cost function.
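As a rough illustration of the geometry part of the conversion (a simplified sketch, not the actual code from the repo above), one MOT pixel box becomes one YOLO line like this; the class label is assumed to have already been mapped to 0 or -1 as described:

```python
# Simplified sketch of the per-box conversion (not the repo's actual script).
# MOT boxes are [xmin, ymin, w, h] in pixels; YOLO wants the box *center*
# and size expressed relative to the image dimensions.

def mot_box_to_yolo(xmin, ymin, w, h, img_w, img_h, label):
    """Convert one MOT pixel box to a YOLO-format annotation line."""
    cx = (xmin + w / 2) / img_w  # relative center x
    cy = (ymin + h / 2) / img_h  # relative center y
    return f"{label} {cx:.6f} {cy:.6f} {w / img_w:.6f} {h / img_h:.6f}"

# A person box gets label 0; a distractor/reflection box would get label -1.
print(mot_box_to_yolo(100, 200, 50, 100, 1920, 1080, 0))
# 0 0.065104 0.231481 0.026042 0.092593
```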

Train Loss & Learning Rate (on YOLOv2)

 

Credit to Stanford cs231n course

 

When training any deep learning model, it is vital to look at the loss to get some intuition about how the network (detector, classifier, etc.) is learning. For example, in the figure below, the training loss for the people detector I am training has already stopped decreasing, even though training is only in its initial stages. Usually, in the initial stages it is common to see the loss decrease quickly and smoothly. Since that is not the case here, we can conclude that something is wrong with the learning rate. The learning rate is the parameter that decides how big a step the network takes when searching for an optimal solution.

Figure 1: training loss for the people detector (learning_rate=0.0001); the loss stops decreasing early

If your learning rate is too big for your task (dataset), then something like the case below happens: the network cannot make the small adjustments needed to optimize, because the provided learning rate is too big.

So a loss that varies a lot and does not decrease is an indication that our learning rate is too big. For the training run above, my learning rate was learning_rate=0.0001.

Let's see what happens if we increase the learning rate 3x (learning_rate=0.0003), and what the loss function looks like then (Figure 2).

Figure 2: loss function for the same architecture as above but with a 3x bigger learning_rate (learning_rate=0.0003)

Oh boy, that doesn't look good, does it? After 160 iterations the loss starts increasing; then, until around iteration 200, the network tries to get back on track – to find parameters that would minimize the loss – but because the learning rate is too big it fails, and the loss starts increasing again after 200 iterations (Figure 2).

At around 290 iterations there is no point in continuing, because the loss is heading toward +infinity (Figure 3).

Figure 3: loss function for the same architecture as in Figure 1 but with a 3x bigger learning rate (learning_rate=0.0003)

The takeaway lesson: when your learning rate is slightly too large for your dataset/task, the loss stops decreasing at the beginning of training (Figure 1). But if you use a far too large learning rate, you get the problem where the loss starts increasing instead of decreasing (Figure 2, Figure 3).

As we said above, the problem in Figure 1 was a too-large learning rate; to demonstrate the problem we tried a 3x larger rate and saw the resulting loss in Figures 2 and 3. Figure 4 shows the loss function when we instead use a learning rate 3x smaller than the initial learning_rate=0.0001, i.e. learning_rate = 0.0001/3. Training with this learning rate gives a stable loss decrease (Figure 4) compared to Figure 1.

Figure 4: loss with a 3x smaller learning rate (learning_rate = 0.0001/3)

In practice, I try 3-5 learning rates (for example 0.001, 0.001*3, 0.001*3*3, 0.001*3*3*3), train for about 1000 iterations, compare their losses, and choose the best learning rate to use for the whole training.
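That search can be sketched in a few lines (a hypothetical illustration, not part of darknet):

```python
# Hypothetical sketch of the factor-of-3 learning-rate search described above.

def candidate_rates(base_lr, n=4):
    """Return n learning rates spaced by factors of 3, starting from base_lr."""
    return [base_lr * 3 ** k for k in range(n)]

def pick_best(loss_by_rate):
    """Pick the rate whose short (~1000 iteration) run ended with the lowest loss."""
    return min(loss_by_rate, key=loss_by_rate.get)

rates = candidate_rates(0.001)  # 0.001, 0.003, 0.009, 0.027
# Hypothetical final losses from short training runs at each candidate rate:
losses = {0.001: 2.1, 0.003: 1.4, 0.009: 3.7, 0.027: float("inf")}
print(pick_best(losses))  # 0.003
```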

That concludes our explanation of loss and learning rate. By the way, the reason I decrease or increase the learning_rate by factors of 3 is that it is a rule of thumb in machine learning to search for the right learning rate by increasing or decreasing it by a factor of 3.

How to train YOLOv2 on custom dataset

Update 1: I found a much better article on how to train YOLOv2 here

 

YOLOv2 is an open-source, state-of-the-art real-time object detector written in C on the darknet deep learning framework: https://pjreddie.com/darknet/yolo/. A simple guide to reproducing the results in the YOLOv2 paper is provided on the author's blog.

For training on a custom dataset, some excellent instructions are given in the Windows port of YOLO at https://github.com/AlexeyAB/darknet.

Here I will show a hands-on approach to training a YOLOv2 detector. (If you cannot see the images clearly, please zoom in your browser.)

Task

Detect/count 8 types of beverage bottles: pepsi, 7up, mirinda, dolina_olma, dolina_behi, dolina_olcha, dolina_limon, dolina_apelsin.

Dataset Preparation

As mentioned above, I use the PEPSI dataset. It contains around 150 images; even though it is small, that should be enough for the example in this post.

I annotated the dataset using the Yolo_mark annotation tool. For a tutorial, visit https://github.com/AlexeyAB/Yolo_mark.

Put all the class labels into the obj.names file, so its contents look like this:

[screenshot: obj.names contents]

Then start the program and begin labeling:

[screenshot: labeling images in Yolo_mark]

 

As a result of annotation, we will have a corresponding .txt file for each image, where each *.txt file contains YOLO-format annotations:

[screenshot: a YOLO-format *.txt annotation file]

 

Next, I moved all the *.txt files into a labels folder and renamed the img folder to images.

So now my folder structure looks like this:

[screenshot: dataset folder structure]

Since we renamed the img folder to images, we now have to change train.txt accordingly:

[screenshot: train.txt]

One last step is to put full paths to the images instead of relative paths, because darknet will later access this file from outside the dataset folder:

[screenshot: train.txt with full image paths]

Training

Network CFG

  1. Copy yolo-voc.cfg from https://github.com/Jumabek/darknet/blob/master/cfg/yolo-voc.cfg and rename it to pepsi.cfg
  2. In the last convolutional layer, change filters=125 to filters=65, which is (5+8)*5. Here the first 5 corresponds to (x, y, w, h, objectness_score), 8 corresponds to the number of classes (in my case I have 8 classes), and the last 5 corresponds to the number of bounding-box predictions per cell.
  3. Change classes=20 to classes=8
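The filter arithmetic in step 2 generalizes to any class count; a quick helper (illustrative only, not part of darknet) makes it explicit:

```python
# YOLOv2's last conv layer outputs, per grid cell and per anchor box,
# 5 numbers (x, y, w, h, objectness) plus one score per class.

def yolo_v2_filters(num_classes, num_boxes=5):
    """Number of filters required in YOLOv2's last convolutional layer."""
    return (5 + num_classes) * num_boxes

print(yolo_v2_filters(8))   # 65  -> our 8-class pepsi.cfg
print(yolo_v2_filters(20))  # 125 -> the original 20-class yolo-voc.cfg
```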

Now this is how our cfg file looks:

[screenshot: pepsi.cfg]

I have a 4 GB GTX 1050 GPU in my laptop, so I set batch=64 and subdivisions=8. That way my GPU processes 64/8 = 8 images in one pass. If, say, you have 8 GB of GPU memory, then you can set batch=64 and subdivisions=4 in order to take advantage of all of your GPU memory and speed up the training.

[screenshot: batch and subdivisions settings in pepsi.cfg]

Creating *.data and *.names files

  1. Copy the obj.names and obj.data files (that we created in the Dataset Preparation step with Yolo_mark) to C:\darknet\build\darknet\x64\data
  2. Rename obj.names to pepsi.names
  3. Rename obj.data to pepsi.data [screenshot: pepsi.data]
  4. Fix the paths in pepsi.data to point to the right files as follows [screenshot: corrected pepsi.data]
  5. Download the darknet19_448.conv.23 pre-trained weights from https://pjreddie.com/media/files/darknet19_448.conv.23 and put them into the C:\darknet\build\darknet\x64\backup folder

Finally, start training:

darknet.exe detector train data/pepsi.data cfg/pepsi.cfg backup\\darknet19_448.conv.23 >> pepsi.log

The training log will be saved in the pepsi.log file, so you can monitor loss, recall, and other metrics by inspecting this file.
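For monitoring, a small script can pull the (iteration, average loss) pairs out of pepsi.log. This is a hedged sketch: it assumes darknet's per-iteration lines look roughly like "37: 2.958273, 3.122041 avg, 0.000100 rate, 3.2 seconds, 2368 images" – check your own log and adjust the pattern if the format differs.

```python
# Sketch of a loss extractor for a darknet training log (format assumed, see above).
import re

LINE = re.compile(r"^\s*(\d+):\s*([\d.]+),\s*([\d.]+)\s+avg")

def parse_loss(log_text):
    """Return a list of (iteration, average_loss) tuples from a darknet log."""
    points = []
    for line in log_text.splitlines():
        m = LINE.match(line)
        if m:
            points.append((int(m.group(1)), float(m.group(3))))
    return points

sample = "37: 2.958273, 3.122041 avg, 0.000100 rate, 3.2 seconds, 2368 images"
print(parse_loss(sample))  # [(37, 3.122041)]
```

You could feed the resulting points into matplotlib to get loss curves like the figures in the previous post.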

[screenshot: darknet training log output]

Enjoy your cup of coffee and come back later 🙂

Important: I will try to make a tutorial on how to get the best out of YOLO training in another post. So, stay tuned!

 

Filling Out the Application for KNUT

Usually, to enter most universities in South Korea, you need to fill out an application form in the prescribed format.

Here I will explain the application form for Korea National University of Transportation (KNUT).

The application is in the format below. In the original, the information is given only in Korean and English. To explain it, I will go through each item and translate and explain it as I go.

The checklist tells you which documents are required. Use it to check yourself: have I prepared all the documents or not?

So below I will describe the documents [Application Documents] numbered 1 through 11.

  1. Application Form – the application blank. I will explain it in detail at the end.
  2. Official (Prospective) Graduation Certificates from undergraduate institution – an official (prospective) graduation certificate from the university where you did your bachelor's degree. If you have not graduated yet, you will need a document stating "expected graduation". The International Office will issue it without problems; otherwise, you can complain to the appropriate office. If you have already graduated, send a notarized copy of your diploma (not the original).
  3. Official (Prospective) Graduation Certificates from graduate institution – almost the same as the above, but issued by the university where you got your master's degree, not your bachelor's (this is only for students applying to a doctoral, i.e. PhD, program).
  4. Official Transcripts from undergraduate institution – the official transcript for your bachelor's diploma. Send a notarized copy of it together with your diploma. If you have not graduated yet, type the grades from your grade book into a Word document, have it stamped by the International Office and the registrar's office, and send it to Korea.
  5. Official Transcripts from graduate institution, if available – same as item 4, but for those who have completed a master's degree.
  6. Release of Information Form (Please fill it out in English.) – I will return to this later in the post.
  7. Language Proficiency Certificates – you need to send the original copies of your English or Korean language certificates. For example, if you have taken IELTS, place an order by calling the British Council or through their site, and in the order specify the address of the Korean university you are applying to.
  8. Study Plan (form #2) – here you describe what you plan to do during your studies; usually you write about the research you expect to carry out. Don't worry, it does not have to be perfect. In fact, applicants to a master's program are not expected to already have research skills; rather, students acquire them during the master's course. Even so, write as best you can about the work you want to do and about your interests.
  9. A copy of your passport and Certificate of Alien Registration – everyone sends a copy of their passport, but if you already live in Korea, you will have been issued an Alien Registration ID. Send a copy of that ID as well.
  10. Documents that certify the applicant's and the applicant's parents' citizenship and relationship – have copies of your parents' passports and your birth certificate translated and notarized. Through these documents the admission office in Korea verifies your nationality.
  11. Applicant's or Sponsor's (Parents') VOD (Verification of Deposit) indicating more than $18,000 USD in funds – a certificate showing that you have $18,000 in the bank.

 

 

CHECKLIST for Application Documents (Graduate School)

<Please organize your documents in the following order and submit them.>

Submission method: □ Visit □ Domestic mail □ International mail

  1. Name: (family/last name) (given/first name) Korean name:

  2. Country of citizenship: Residence: □ in Korea □ Abroad

  3. Desired Department: Degree Program: □ Master's □ Doctoral

(*Please tick (√) in the appropriate box, and attach all the required documents in order listed.)

Application Documents | Submission (Yes / No) | Remarks

  1. Application Form (Form #1)
  2. Official (Prospective) Graduation Certificates from undergraduate institution
     * Consular confirmation by the Korean consulate in your country of residence or by your own country's diplomatic mission / Apostille certificate / degree verification report from the Chinese Ministry of Education (for graduates of overseas universities)
  3. Official (Prospective) Graduation Certificates from graduate institution
     * Consular confirmation by the Korean consulate in your country of residence or by your own country's diplomatic mission / Apostille certificate / degree verification report from the Chinese Ministry of Education (for graduates of overseas graduate schools)
  4. Official Transcripts from undergraduate institution
  5. Official Transcripts from graduate institution, if available
     * Doctoral applicants only
  6. Release of Information Form (Please fill it out in English.)
     * Please do not fill out the applicant number.
  7. Language Proficiency Certificates
     * If the language proficiency requirement is not met, a recommendation letter from a distinguished researcher must be submitted (form #3)
  8. Study Plan (form #2)
  9. A copy of your passport and Certificate of Alien Registration
  10. Documents that certify the applicant's and the applicant's parents' citizenship and relationship
     (Applicants of Chinese nationality: applicant's and family members' ID cards, household register, family relation certificate)
  11. Applicant's or Sponsor's (Parents') VOD (Verification of Deposit) indicating more than $18,000 USD in funds

[Form 1]

Graduate School Office

Korea National University of Transportation

50 Daehak-ro, Chungju, Chungbuk 380-702

Republic of Korea

Phone: +82-43-841-5036, Fax: +82-43-841-5038

E-mail: knut2@ut.ac.kr

[COLOR PHOTO: 3cm x 4cm]

(To be continued)

QuickSort

Like merge sort, quicksort uses the divide-and-conquer approach, so it is a recursive algorithm. But quicksort uses this approach for a slightly different purpose. In merge sort the divide step requires no extra work; the main work happens in the merge step. In quicksort it is the other way around: the main work happens in the partition (divide) step, while the combine step does no work at all.

Quicksort differs from merge sort in other ways as well. Quicksort works in place, i.e. there is no need to allocate another array in memory. This matters when sorting large arrays: merge sort needs O(n) extra memory, while quicksort needs essentially none. Its worst-case running time, however, is O(n^2), although its typical running time is O(n lg n), just like merge sort. Question: why do some programs use quicksort rather than merge sort even when memory is not an issue? Because in practice both run in about c·n·lg n time, but the constant factor c for quicksort is much smaller than merge sort's. That is why quicksort usually outperforms merge sort. (To be continued)
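A short in-place Python sketch shows the structure described above: all the real work happens in the partition step, and there is no combine step. (This uses the Lomuto partition scheme with the last element as pivot; the post itself does not fix a particular scheme.)

```python
# In-place quicksort: partition does the work, there is no merge/combine step.

def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        p = partition(a, lo, hi)  # all the real work happens here
        quicksort(a, lo, p - 1)   # sort the left part
        quicksort(a, p + 1, hi)   # sort the right part; nothing to combine

def partition(a, lo, hi):
    """Move elements <= pivot to the left; return the pivot's final index."""
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i

nums = [5, 2, 9, 1, 5, 6]
quicksort(nums)
print(nums)  # [1, 2, 5, 5, 6, 9]
```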

 

 

L'Hôpital's Rule

In engineering you often have to evaluate all kinds of limit expressions. Among them, sometimes the simple limit rules, such as the "limit of a quotient" or the "limit of a sum," do not work.

Examples (the original equation images are reconstructed here as the standard limit laws they illustrated):

Limit of a quotient (valid when the denominator's limit is nonzero):

lim_{x→a} f(x)/g(x) = (lim_{x→a} f(x)) / (lim_{x→a} g(x))

Limit of a sum:

lim_{x→a} [f(x) + g(x)] = lim_{x→a} f(x) + lim_{x→a} g(x)

Unfortunately, the limit below cannot be computed by applying the rules above.

[missing figure: an example limit of the ∞/∞ form]

If we try to apply the "limit of a quotient" rule, both the numerator and the denominator of the fraction are infinite. Any number divided by itself equals 1, but infinity cannot be divided by infinity.

This is where L'Hôpital's rule comes in handy:

[missing figure: L'Hôpital's rule applied to the example]

Here we take the derivative of both the numerator and the denominator, and then evaluate the limit.

L'Hôpital's rule:

If the numerator and the denominator of the limit both tend to 0 (or both tend to ∞), then the limit can be computed by replacing the numerator with its derivative and the denominator with its derivative.

Proof (sketch):

Suppose the values of the functions at the point a are zero, i.e.:

f(a) = 0 and g(a) = 0.          (1)

 

Then the limit of the quotient of these two functions is computed as follows:

lim_{x→a} f(x)/g(x) = lim_{x→a} [ (f(x) − f(a)) / (x − a) ] / [ (g(x) − g(a)) / (x − a) ] = f′(a) / g′(a)

Here the values f(a) and g(a) are zero by (1), and the (x − a) factors cancel each other out.
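As a concrete worked example (my own illustration; the post's original example image is missing), the classic 0/0 limit:

```latex
% L'Hopital's rule on a 0/0 form: differentiate numerator and denominator.
\[
  \lim_{x \to 0} \frac{\sin x}{x}
  = \lim_{x \to 0} \frac{(\sin x)'}{(x)'}
  = \lim_{x \to 0} \frac{\cos x}{1}
  = \cos 0 = 1.
\]
```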

Information Technology: Every Puzzle Has a Solution

I am a software engineer – at least, that is the specialty written on my bachelor's diploma.

Since I am currently studying for a master's degree, I need to do research; moreover, this matters a great deal for my future.

Some people say that if you are interested in a field, learning it comes easily. But somehow this has not proven true in my life. Even though I am interested in my field, there are many temptations that distract me from it:

  • Wasting time on Facebook with no purpose
  • Searching the internet for useless information
  • Watching news and videos on YouTube that have nothing to do with me
  • In some cases this even leads me to watching indecent images

Why am I telling you this?

My falling victim to all these temptations starts in one place; the root is the same: whenever I want to do something but cannot. I consider myself among the most important people, so whenever I find myself helpless before some problem, my spirits sink very low.

If you, like me, run away from problems and cannot fend off those temptations, then reading this post up to here has not been a waste of your time.

So a problem we have failed to solve is dragging us toward ruin. In my opinion, we can take two measures against this:

  1. Accept the truth. We are nobody – it is not we who change the world. Rather, our one Lord, who created us and the worlds, changes the world through us. Indeed, not everything is within our power; we were created weak. Don't believe me? Then try answering these questions: did you choose your parents? Your siblings? Which nation you would be born into? When to be born, or when to die? No. But there is a notion of "optimal": if you run with all your strength, how fast can you run? That is called your optimal speed. Even though not everything is within our power, we want to leave our mark on this world at an optimal level. To achieve this, Allah has given us the path that leads to happiness in both worlds. Using those guidelines, we should solve the problems and puzzles that are within our power. And when our strength falls short, instead of despairing to the point of self-destruction, remembering that we are His servants is the wise person's decision.
  2. Know how to work. There are two kinds of problems in the world: solvable and unsolvable (here I should note that I know of only one problem that humans cannot solve). Solvable problems should be solved in the optimal way. That is, if I ask you for a flight from Tashkent to Istanbul, you will have not one but several options. Choosing the most optimal of those options and offering it to me shows that you are a master of your craft.

 

Any problem can be solved by breaking it into small pieces. Once you understand the pieces of the puzzle, you can understand the whole system.

 

 

 

Three kinds of words in Arabic

All words in Arabic fall into one of three categories.

They are:

  • ISM: can be a person, place, thing, idea, adjective, adverb, and more
    • Person: Muhammad Ali, Michael Jackson
    • Place: Makkah, Namangan, Tashkent
    • Thing: phone, TV, laptop
    • Idea: I like reading, I love swimming.
    • Adjective: describes things, e.g. modest girl, high mountain.
    • Adverb: describes how an action is done, e.g. working happily, eating slowly.
  • Fi'l: words that are fixed in the past, present, or future tense. Fi'ls are similar to verbs in English but not exactly the same, e.g. was born, lives, will see.
  • Harf: a word that makes no sense unless there is a word after it.